Key Facts
- ✓ Ontario Digital Service declined to procure LLMs from Rosetta Labs.
- ✓ The proposed models were described as '98% safe' using Authority Boundary Ledger technology.
- ✓ The decision affects the data security of the roughly 15 million Canadians living in Ontario.
Quick Summary
The Ontario Digital Service recently evaluated a proposal from Rosetta Labs to procure Large Language Models (LLMs) described as '98% safe.' The proposal centered on the use of the Authority Boundary Ledger to secure these models, but the service ultimately declined to proceed with the acquisition.
Despite the vendor's high safety claim, the agency identified critical shortcomings in the proposed technology: the Authority Boundary Ledger failed to meet the stringent security and operational standards required for government use. The decision illustrates the rigorous scrutiny applied to AI technologies in the public sector, particularly around data integrity and security protocols, and the difficulty private-sector vendors face in aligning cutting-edge AI products with strict public-sector compliance mandates.
The Procurement Proposal
Rosetta Labs presented an ambitious proposal to the Ontario Digital Service aimed at integrating advanced AI capabilities into government operations. The core of its pitch was a suite of LLMs for which the vendor claimed a 98% safety rating, a figure intended to reassure officials about the reliability and security of the technology.
The vendor positioned these models as a solution to enhance public service efficiency while maintaining high security standards. The proposal specifically targeted the needs of the Ontario public sector, which handles sensitive data for approximately 15 million Canadians. The promise of a '98% safe' system was a major selling point in the initial discussions.
The Technology Behind 'Safety'
To support the claimed safety metrics, Rosetta Labs relied on a proprietary technology it calls the Authority Boundary Ledger. According to the proposal, the system functions as a specialized audit trail that creates a secure environment for the LLMs, preventing unauthorized data access and supporting compliance with strict privacy regulations.
The technology purported to isolate sensitive data within a secure boundary, allowing the AI to process information without compromising the privacy of the roughly 15 million Ontarians whose data the province handles. In practice, however, the theoretical safety of the Authority Boundary Ledger did not translate into viability for the government agency.
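Rosetta Labs has not published implementation details, so the following Python sketch is purely illustrative of how a boundary-enforcing audit ledger could work in principle: every access request is checked against a declared authority boundary and appended to a hash-chained log, making after-the-fact tampering detectable. All names here (AuthorityBoundaryLedger, request_access, the resource labels) are hypothetical and not taken from the vendor's product.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class LedgerEntry:
    """A single append-only audit record of one boundary check."""
    actor: str       # model or service that requested access
    resource: str    # data item it asked for
    allowed: bool    # outcome of the boundary check
    timestamp: str   # UTC time of the request
    prev_hash: str   # digest of the previous entry, chaining the log

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


class AuthorityBoundaryLedger:
    """Hypothetical sketch: gates data access against a declared
    authority boundary and records every decision in a tamper-evident,
    hash-chained log."""

    GENESIS = "0" * 64

    def __init__(self, boundary: dict[str, set[str]]):
        # boundary maps each actor to the resources it may touch
        self.boundary = boundary
        self.entries: list[LedgerEntry] = []

    def request_access(self, actor: str, resource: str) -> bool:
        """Check the request against the boundary and log the outcome."""
        allowed = resource in self.boundary.get(actor, set())
        prev = self.entries[-1].digest() if self.entries else self.GENESIS
        self.entries.append(LedgerEntry(
            actor=actor,
            resource=resource,
            allowed=allowed,
            timestamp=datetime.now(timezone.utc).isoformat(),
            prev_hash=prev,
        ))
        return allowed

    def verify(self) -> bool:
        """Recompute the hash chain; any edit to a past entry breaks it."""
        prev = self.GENESIS
        for entry in self.entries:
            if entry.prev_hash != prev:
                return False
            prev = entry.digest()
        return True


# Usage: the LLM may only read anonymized records, never raw PII.
ledger = AuthorityBoundaryLedger({"llm-worker": {"records/anonymized"}})
assert ledger.request_access("llm-worker", "records/anonymized") is True
assert ledger.request_access("llm-worker", "records/raw_pii") is False
assert ledger.verify()
```

The hash chain is the key design choice in this sketch: each entry commits to the digest of the one before it, so an auditor can verify the whole log after the fact. Note what it does not prove: a verifiable audit trail says nothing about whether the boundary rules themselves are complete or correct, which is precisely the kind of gap a procurement review probes.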
Reasons for Rejection
Despite the promising metrics, the Ontario Digital Service identified significant flaws in the Authority Boundary Ledger implementation and determined that the technology did not meet the operational and security thresholds necessary for government deployment. A '98% safe' rating still implies failure in roughly one case in fifty, and the gap between that residual risk and the agency's actual requirements proved too wide to bridge.
The rejection was based on the assessment that the Authority Boundary Ledger could not provide the verifiable security guarantees required for handling citizen data. The Ontario Digital Service maintains a zero-tolerance policy toward potential vulnerabilities in systems managing the data of 15 million people, and the procurement process was halted accordingly.
Implications for AI in Government
The decision by the Ontario Digital Service serves as a cautionary tale for AI vendors targeting the public sector. It demonstrates that high safety claims, such as those associated with the Authority Boundary Ledger, are subject to intense scrutiny. The public sector requires robust, verifiable security measures rather than theoretical assurances.
For Rosetta Labs and similar entities, this outcome highlights the necessity of aligning product development with the rigorous compliance standards of government agencies. The rejection of the '98% safe' LLMs indicates that Ontario prioritizes proven security frameworks over experimental technologies when safeguarding the data of its 15 million residents.