AI Policy
This Artificial Intelligence Policy (AI Policy) sets out Hephaistos Pty Ltd's approach to the design, development, deployment, and use of artificial intelligence in connection with its products and services, including the Murrai document review system.
The purpose of this AI Policy is to provide transparency regarding how AI is used within Hephaistos' offerings, to define the boundaries of that use, and to demonstrate responsible governance, including robust efforts to mitigate bias, in regulated and high-consequence contexts.
This AI Policy reflects Hephaistos' business strategy of applying AI to support human review, prioritisation, and decision making in complex technical documentation, rather than to replace accountable professionals or make autonomous determinations. The mechanism for surfacing issues is intended to be sufficiently transparent to enable effective human validation.
It is intended to support compliance with applicable laws, emerging AI governance standards, and client governance expectations. Hephaistos may update this AI Policy from time to time to reflect changes in technology, regulation, or industry practice.
This AI Policy should be read in conjunction with any acceptable use, security, or data handling policies issued by Hephaistos in relation to Murrai or other services.
1. Compliance with laws and regulatory expectations
Hephaistos will comply with applicable laws, regulations, and regulatory guidance relating to the development and use of AI systems, to the extent such laws apply to its services. This includes consideration of international AI governance frameworks and standards where relevant to clients operating in regulated environments.
Hephaistos maintains an ongoing process for monitoring and adapting to new applicable laws, regulations, and regulatory guidance relating to the development and use of AI systems.
Hephaistos does not position Murrai as a legally or safety determinative system and does not rely on AI to make decisions that require licensed professional judgment.
2. Responsible use and AI literacy
Hephaistos maintains internal practices to ensure that personnel involved in the design, operation, and deployment of AI systems have an appropriate level of AI literacy for their role.
AI within Murrai is used to assist human reviewers by surfacing potential priority issues, inconsistencies, or weakly structured statements in technical documents. Final interpretation, acceptance, or rejection of any output remains the responsibility of the human user.
Murrai is designed to support review discipline, not to replace professional accountability. Hephaistos designs Murrai to prevent misuse and maintains controls to ensure its deployment aligns strictly with the intended scope of review support.
3. Data sources and data handling
Hephaistos takes reasonable steps to ensure that data used in the development and operation of Murrai is handled lawfully and ethically.
Client documents processed by Murrai remain the responsibility of the client and are used solely for the purpose of delivering the agreed service. Hephaistos does not claim ownership of client content and implements controls to limit data use to its intended scope.
Hephaistos implements controls and policies governing the secure retention and verifiable deletion of client content upon service completion, in line with contractual agreements.
Where third party tools or models are used, Hephaistos assesses data handling and licensing considerations as part of its risk management process.
4. Technical and organisational controls
Hephaistos implements technical and organisational measures to govern the use of AI within Murrai and related services. These measures include:
- internal policies governing acceptable AI use;
- security controls appropriate to the sensitivity of the information processed;
- testing to assess consistency, suitability of outputs, and potential for unintended bias in review support; and
- change management processes to ensure that modifications to AI behaviour do not unintentionally expand scope or risk.
Murrai is intentionally constrained to prioritisation and commentary rather than prescriptive or autonomous output.
5. Transparency and human oversight
Hephaistos is transparent about the use of AI within Murrai. Users are informed that they are interacting with an AI assisted system and that outputs are advisory in nature.
Murrai highlights areas for human attention and provides explanatory commentary to support review, but it does not assert correctness or compliance.
Deviations from Murrai-flagged items are permitted and are expected to be resolved through human judgment, with accountability retained by the reviewing professional.
6. Fairness and bias mitigation
Hephaistos is committed to the responsible design of Murrai. As part of its technical and organisational controls, Hephaistos conducts assessments to identify and mitigate reasonably foreseeable risks of unfair bias in the training data and model outputs where such bias could negatively impact the professional review process.
Testing protocols include checks for consistency across different input variables to maintain the system's reliability and objectivity.
Summary of key legal risks mitigated
- Liability for error: Mitigated by positioning Murrai as advisory and retaining final human accountability.
- Regulatory non-compliance: Mitigated by committing to follow applicable laws and ensuring Murrai is not legally determinative.
- Data privacy: Mitigated by respecting client ownership and limiting data use to the agreed service scope.
