
AI Transparency Statement

Last Updated: April 3, 2026

Introduction

This AI Transparency Statement is published by IS NOT AI LLC ("NotAI") to consolidate the disclosures required under jurisdiction-specific AI regulation, including the Colorado Artificial Intelligence Act (SB 24-205, codified at C.R.S. § 6-1-1701 et seq.), the European Union Artificial Intelligence Act (Regulation (EU) 2024/1689), and comparable laws identified in the jurisdiction table at the end of this page. It supplements, and does not replace, the NotAI Privacy Policy, the NotAI Terms of Service, and the Parents' Bill of Rights.

Capitalized terms used but not defined in this statement have the meanings given in the NotAI Privacy Policy or in the applicable regulation cited.

1. The AI System and Its Intended Purpose

System name. NotAI Authorship Verification and Agent Detection Services (the "Services").

Intended purpose. The Services analyze behavioral signals collected during text composition (keystroke-timing patterns, cursor-movement dynamics, scroll and click telemetry, paste events, focus and blur transitions, and related session metadata) to produce a confidence score indicating the likelihood that the observed composition was produced by a human author using a keyboard, as opposed to being pasted from, generated by, or orchestrated by an automated agent such as a large language model, browser-automation framework, or remote-control tool. The Services are designed to inform authorship review, academic-integrity workflows, and fraud-and-abuse decisions made by a human reviewer within the deploying institution.

Modality. The Services operate on behavioral session data. The Services do not perform content-based detection of AI-generated text (the Services do not analyze the semantic content of a submission), do not perform facial recognition, and do not analyze audio, video, or webcam imagery.
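
For illustration, the following is a minimal sketch of the kind of client-side behavioral-signal capture described above, written in TypeScript for a browser context. The element selector, type names, and field names are hypothetical and do not reflect NotAI's actual telemetry schema; note that only the length of pasted text is recorded, never its content.

    type KeyEventSample = { key: string; downAt: number; upAt?: number };
    type PasteSample = { at: number; pastedLength: number };

    const keySamples: KeyEventSample[] = [];
    const pasteSamples: PasteSample[] = [];

    // "#essay" is a placeholder for the monitored composition field.
    const editor = document.querySelector<HTMLTextAreaElement>("#essay")!;

    editor.addEventListener("keydown", (e) => {
      // performance.now() provides sub-millisecond timestamps for cadence analysis.
      keySamples.push({ key: e.key, downAt: performance.now() });
    });

    editor.addEventListener("keyup", (e) => {
      // Close out the most recent unmatched keydown for this key to capture hold time.
      const open = keySamples.findLast((s) => s.key === e.key && s.upAt === undefined);
      if (open) open.upAt = performance.now();
    });

    editor.addEventListener("paste", (e) => {
      // Record only the length of the pasted text, not its content, consistent
      // with the no-semantic-analysis scope stated above.
      const text = e.clipboardData?.getData("text") ?? "";
      pasteSamples.push({ at: performance.now(), pastedLength: text.length });
    });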

Output. For each analyzed session, the Services return a numeric confidence score, a set of component sub-scores corresponding to individual detection techniques, and, where available, contextual metadata supporting reviewer interpretation (for example, indicators of paste events, automation framework signatures, or anomalous typing cadence). Outputs are designed for interpretation by a trained human reviewer and are not intended to be used as a sole automated basis for a consequential decision.
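
As a concrete illustration of that output shape, a per-session result might look like the following TypeScript sketch. The field names and value ranges are assumptions chosen for readability, not NotAI's published API schema.

    interface SessionResult {
      sessionId: string;
      // Overall likelihood of human keyboard authorship, here on a 0..1 scale.
      confidence: number;
      // Component sub-scores, one per detection technique.
      subScores: {
        typingCadence: number;
        cursorDynamics: number;
        automationSignatures: number;
      };
      // Contextual metadata supporting reviewer interpretation.
      context: {
        pasteEventCount: number;
        automationFramework?: string; // present only when a signature is detected
      };
    }

    // Illustrative value a reviewer might see for a suspicious session:
    const example: SessionResult = {
      sessionId: "sess_0001",
      confidence: 0.23,
      subScores: { typingCadence: 0.18, cursorDynamics: 0.31, automationSignatures: 0.12 },
      context: { pasteEventCount: 3, automationFramework: "playwright" },
    };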

2. Roles: Developer and Deployer

Developer. NotAI is the developer of the Services within the meaning of C.R.S. § 6-1-1701(7) (Colorado AI Act) and the provider within the meaning of Article 3(3) of the EU AI Act. In this capacity, NotAI designs, trains, validates, and maintains the Services and places them on the market.

Deployer. An institution, instructor, administrator, or other customer that puts the Services into use in its own name or for its own users is the deployer within the meaning of both C.R.S. § 6-1-1701(6) and Article 3(4) of the EU AI Act. The deployer is responsible for the decisions it makes using the Services, including the decision to rely on a NotAI confidence score when evaluating a submission.

Substantial modification. A deployer that substantially modifies the Services, rebrands them, or combines them with other systems in a way that creates a different high-risk AI system within the meaning of the applicable regulation may itself become a developer or provider under C.R.S. § 6-1-1703(5) or Article 25 of the EU AI Act, with the corresponding obligations.

3. High-Risk Classification and Consequential Decisions

Colorado AI Act. The Services, when used by an educational institution or an employer to make, or to be a substantial factor in making, a consequential decision as defined in C.R.S. § 6-1-1701(3) (including decisions concerning educational enrollment or an educational opportunity, or a decision regarding employment or an employment opportunity), are treated by NotAI as a high-risk artificial intelligence system within the meaning of C.R.S. § 6-1-1701(9). In those deployments NotAI complies with the developer obligations in C.R.S. § 6-1-1702 through § 6-1-1703, and deployers are responsible for their obligations under C.R.S. § 6-1-1704.

EU AI Act. When used by an educational or vocational-training deployer for any of the high-risk uses set out in Annex III, point 3 of the EU AI Act (in particular, to evaluate learning outcomes (point 3(b)), to assess the appropriate level of education that an individual will receive or will be able to access (point 3(c)), or to monitor and detect prohibited behaviour of students during tests (point 3(d)), for example in connection with authorship verification of coursework or examinations), the Services are treated by NotAI as a high-risk AI system under the EU AI Act. In those deployments NotAI complies with the Chapter III, Section 2 requirements for high-risk AI systems, including a risk-management system, data governance, technical documentation, record-keeping, transparency and provision of information to deployers, human oversight, accuracy/robustness/cybersecurity measures, and the conformity-assessment and post-market monitoring duties. When used solely for use cases outside Annex III, the high-risk obligations do not apply, but the transparency duties described in Section 4 of this statement still do.

4. Transparency to Deployers and to Affected Individuals

4.1 Developer disclosures to deployers

In accordance with C.R.S. § 6-1-1702 and Article 13 of the EU AI Act, NotAI supplies each deployer with documentation that describes:

  • The intended uses, and known harmful or inappropriate uses, of the Services (see Section 5);
  • A high-level summary of the type of data used to train the Services (see Section 6);
  • Reasonably foreseeable limitations of the Services (see Section 7) and measures a deployer should take to mitigate those limitations, including human-oversight and review requirements;
  • The purpose of the Services and the benefits that the Services are intended to deliver;
  • How the Services should be evaluated for performance and monitored for discrimination, including the evaluation metrics used by NotAI in pre-deployment testing;
  • The data-governance measures applied to training, validation, and testing data sets, including measures to examine for possible biases;
  • Instructions for use, including human-oversight measures, expected accuracy and robustness, characteristics of input data, and the computational and hardware resources required, in the level of detail required by Article 13 of the EU AI Act.

4.2 User-facing notification (interaction with an AI system)

Where the Services operate in a manner that an affected individual would not otherwise reasonably expect, NotAI provides, or supports the deployer in providing, a clear and distinguishable notice to the affected individual, no later than the first interaction, stating that behavioral signals are being collected and analyzed by an automated system for authorship verification. This notice satisfies the deployer-consumer notification obligation in C.R.S. § 6-1-1704(4) and the user-notification obligation in Article 50(1) of the EU AI Act.

4.3 Marking of automated output

As a matter of NotAI policy and consistent with the spirit of Articles 50(2) and 50(4) of the EU AI Act (which address the marking of synthetic and manipulated content generated by AI systems), where the Services produce output that informs a decision about a natural person, NotAI marks that output as machine-generated. NotAI does not currently generate synthetic image, audio, video, or deepfake content within the meaning of Articles 50(2) or 50(4); the marking practice in this Section 4.3 is a voluntary clarity measure rather than a strict statutory marking obligation under those provisions.

4.4 Deployer-to-consumer notice of adverse consequential decision

When a deployer uses the Services as the basis for, or a substantial factor in, a consequential decision that is adverse to a consumer, the deployer is required under C.R.S. § 6-1-1704(3) to provide the consumer with (a) a statement of the principal reason or reasons for the decision, (b) the degree to which the Services contributed to the decision, (c) the type of data that was processed in making the decision, (d) the source or sources of that data, (e) an opportunity to correct any incorrect personal data that the Services processed, and (f) an opportunity to appeal to a human reviewer. NotAI supports the deployer in meeting these obligations by making available a per-session explanation of component sub-scores, a description of the input-data categories used, and a reviewer interface that records the identity of the human reviewer who approves or overrides an automated output.
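
As a hypothetical illustration of the per-decision record that can support these notice elements, a deployer-side audit entry might capture fields like the following TypeScript sketch; the schema is an assumption for clarity, not NotAI's actual reviewer-interface format.

    interface ReviewerDecisionRecord {
      sessionId: string;
      automatedConfidence: number;   // the score presented to the reviewer
      reviewerId: string;            // identity of the human reviewer
      action: "approve" | "override";
      principalReasons: string[];    // supports item (a), the statement of reasons
      inputDataCategories: string[]; // supports item (c), the data types processed
      decidedAt: string;             // ISO 8601 timestamp
    }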

4.5 Right to appeal (EU AI Act Article 26(11))

Where the Services are used as a high-risk AI system under the EU AI Act, deployers are responsible for informing natural persons who are subject to decisions that are made or assisted by the Services, in accordance with Article 26(11). NotAI's Data Processing Agreement and documentation package are designed to enable deployers to meet this obligation.

4.6 Biometric categorisation notice (Article 50(3))

To the extent that NotAI's behavioral-signal analysis is treated as a biometric categorisation system within the meaning of Articles 3(40) and 50(3) of the EU AI Act, deployers are required under Article 50(3) to inform the natural persons exposed to the system of its operation and to process any personal data in accordance with Regulations (EU) 2016/679 (GDPR) and (EU) 2018/1725 and Directive (EU) 2016/680. NotAI provides, or supports the deployer in providing, the corresponding notice in a clear and distinguishable manner, no later than the first interaction. This notice is in addition to, and does not substitute for, the user-facing notification described in Section 4.2.

5. Intended Uses and Prohibited Uses

5.1 Intended uses

The Services are intended for use by educational institutions, academic-integrity programs, employers evaluating work product, and fraud-and-abuse teams to assist a human reviewer in distinguishing between human-authored and machine-authored or machine-assisted text composition. Intended deployments include:

  • Authorship verification of student coursework, exam responses, and admissions essays;
  • Bot-and-agent detection in self-paced assessment, hiring-assessment, and certification environments;
  • Fraud detection in financial, healthcare, and legal workflows where the origin of a submission matters; and
  • Research, benchmarking, and academic study of the Services themselves, provided the researcher complies with the Terms of Service.

5.2 Prohibited and restricted uses

The prohibited and restricted uses of the Services are set out in Section 4 of the NotAI Terms of Service. Those prohibitions bind every customer as a matter of contract, and the prohibited uses remain unauthorized regardless of any contrary customer instruction. NotAI will not knowingly support a use of the Services that falls within any of the categories prohibited by Section 4.2 of the Terms of Service, and will enforce the conditional restrictions in Section 4.4 (employment, K–12 and under-13, and credit / insurance / housing) and the mandatory human-review requirement in Section 4.3.

6. Training Data Summary

In accordance with C.R.S. § 6-1-1702(2)(a)(III) and Article 13(3)(b)(v) of the EU AI Act, NotAI provides the following high-level summary of the type of data used to develop and train the Services. This summary does not disclose trade secrets or individually identifiable information.

  • Behavioral session data from consenting human contributors. Keystroke-timing sequences, cursor-movement traces, scroll and click telemetry, and related session metadata collected from volunteers who provided informed consent to contribute their behavioral data to NotAI's training corpus. Contributors were recruited with the goal of broad representation across input devices, operating systems, and language backgrounds, and NotAI is expanding its formal documentation of contributor attributes to support more granular disaggregated evaluation.
  • Synthetic behavioral traces produced by automated agents. Traces generated by large language models used as chat interfaces, by browser-automation frameworks (including Playwright, Puppeteer, and Selenium), by remote-control tools, and by paste-from-clipboard flows. These traces provide negative-class examples representative of the automation modalities the Services are designed to detect; a minimal generation sketch appears after this list.
  • Adversarial traces. Traces produced by contributors asked to evade detection using publicly documented techniques, used to harden the Services against evasion.
  • Derived and aggregated statistical patterns. Aggregated features extracted from the foregoing inputs that are not attributable to any identifiable individual.
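
The following is the minimal, hypothetical sketch of negative-class trace generation referenced above, using Playwright; the target URL and element selector are placeholders, and the fixed inter-key delay illustrates the uniform cadence that distinguishes scripted input from the variable timing of human typists.

    import { chromium } from "playwright";

    // Hypothetical negative-class trace generator; URL and selector are placeholders.
    async function generateSyntheticTrace(text: string): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto("https://example.test/editor");
      // A fixed 50 ms inter-key delay yields the uniform cadence typical of
      // scripted input, one of the signatures the Services learn to detect.
      await page.locator("#essay").pressSequentially(text, { delay: 50 });
      await browser.close();
    }

    generateSyntheticTrace("The quick brown fox jumps over the lazy dog.").catch(console.error);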

The training corpus does not include the semantic content of a submission, does not include facial images or voice recordings, and does not include behavioral signals collected from a deployed customer's users for the purpose of training, except to the limited extent that an Enterprise customer opts in under a separate written agreement and only in de-identified and aggregated form. See Privacy Policy Section 3 for the service-improvement disclosures.

7. Limitations and Risks

7.1 Known limitations

  • Probabilistic output. The Services return a confidence score, not a deterministic classification. The Services can produce false positives (human-authored work flagged as machine-authored) and false negatives (machine-authored or machine-assisted work not flagged). No confidence-score threshold eliminates either error class; an illustrative triage sketch appears after this list.
  • Distribution shift. Performance may degrade in deployment contexts that differ materially from the training distribution, including unusual input devices (non-standard keyboards, on-screen keyboards, speech-to-text inputs), assistive technologies used by individuals with motor or cognitive disabilities, and languages or writing systems underrepresented in the training corpus.
  • Accessibility considerations. Individuals who use speech-to-text, switch-input, eye-tracking, dictation, or other assistive technologies will produce behavioral signals that differ from keyboard-typing patterns and may be flagged by a naive reviewer as atypical. Deployers must configure accommodations and human-reviewer guidance to prevent discrimination on the basis of disability.
  • Paste legitimacy. A paste event is a strong indicator that the pasted text did not originate as keystrokes in the current session, but does not by itself establish that the pasted text was machine-generated. Legitimate pastes (quoting a source, moving text between applications) occur frequently in human-authored workflows.
  • Evasion. Sophisticated adversaries may use techniques specifically designed to mimic human-typing patterns. NotAI maintains an ongoing adversarial-robustness program but does not claim the Services are impossible to evade.
  • Scope of coverage. The Services detect behavioral indicators of automation during composition. They do not detect post-composition editing by a human of machine-generated text ("hybrid authorship"), and they do not analyze semantic content. Deployers that require hybrid-authorship detection should combine the Services with complementary tools.
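
To make the probabilistic-output point concrete, here is the triage sketch referenced in the first bullet, in TypeScript. The numeric cut-offs are purely illustrative: moving them only trades false positives against false negatives and never eliminates either class, which is why every flagged band routes to a human reviewer.

    type ReviewAction = "clear" | "human_review" | "priority_review";

    // Hypothetical triage mapping; threshold values are illustrative only.
    function triage(confidence: number): ReviewAction {
      // Raising this bound sends more genuine human authors to review (false positives);
      // lowering it lets more automated sessions pass unflagged (false negatives).
      if (confidence >= 0.9) return "clear";
      if (confidence >= 0.5) return "human_review"; // ambiguous band: reviewer decides
      return "priority_review";                     // strong automation signals, still human-reviewed
    }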

7.2 Risks of algorithmic discrimination

In accordance with C.R.S. § 6-1-1702(1), NotAI uses reasonable care to avoid algorithmic discrimination as defined in C.R.S. § 6-1-1701(1). NotAI's risk-management program includes: pre-deployment evaluation on validation data spanning a range of input devices, operating systems, and language backgrounds; performance examination disaggregated across input-device class, operating system, language background, and assistive-technology use, where those attributes are known or inferable, with the granularity of disaggregation increasing as documentation of contributor and deployment attributes matures; continuing post-market monitoring of deployment-level false-positive rates; and an impact-assessment template made available to deployers to support their own impact-assessment obligations under C.R.S. § 6-1-1704. NotAI will disclose any known or reasonably foreseeable risk of algorithmic discrimination to the Colorado Attorney General and to deployers as required by C.R.S. § 6-1-1702(5) and C.R.S. § 6-1-1703.

8. Risk Management, Governance, and Post-Market Monitoring

NotAI maintains a written risk-management program that covers the development, deployment, and post-market monitoring of the Services. The program is designed to satisfy the developer risk-management obligations in C.R.S. § 6-1-1702(2)(a)(I), the post-market monitoring obligations in C.R.S. § 6-1-1702(3), and the high-risk AI-system risk-management, data-governance, technical-documentation, record-keeping, transparency, human-oversight, and accuracy/robustness/cybersecurity obligations in Articles 9, 10, 11, 12, 13, 14, and 15 of the EU AI Act. The program is reviewed at least annually and whenever a material change is made to the Services.

The program includes:

  • A documented risk-assessment methodology that identifies foreseeable risks of algorithmic discrimination, error-rate disparity, safety harm, and misuse, and that assigns mitigations to each;
  • A data-governance protocol covering training-data provenance, consent, de-identification, bias examination, quality assurance, and retention;
  • Technical documentation sufficient to demonstrate the Services' compliance with applicable requirements, made available to regulators on request;
  • Automated and manual logging of the Services' operation, retained for the period required by applicable law and by NotAI's information-security program;
  • Pre-deployment and ongoing human-in-the-loop oversight measures;
  • A pre-deployment testing protocol covering accuracy, robustness, and cybersecurity, and ongoing post-market monitoring of the same;
  • An incident-response protocol that produces a notification to the deployer and, where required, to the regulator within the applicable statutory window.

9. Incident and Regulator Notifications

9.1 Algorithmic-discrimination disclosure to the Colorado Attorney General

If NotAI discovers that the Services have caused or are reasonably likely to have caused algorithmic discrimination, NotAI will notify the Colorado Attorney General and, where applicable, known deployers, without unreasonable delay and no later than ninety (90) days after discovery, in accordance with C.R.S. § 6-1-1702(5). NotAI maintains the notification channel at [email protected] for consumers, deployers, and regulators to report suspected algorithmic discrimination, and evaluates each report within the statutory period.

9.2 EU serious-incident reporting (Article 73)

Where the Services are placed on the EU market as a high-risk AI system, NotAI reports serious incidents to the market-surveillance authority of the Member State where the incident occurred within the timeframes required by Article 73 of the EU AI Act (generally fifteen (15) days, with accelerated timeframes for incidents affecting critical infrastructure or involving the death of a person).

9.3 Deployer incident notification

NotAI will notify affected deployers of a known material defect, security incident, or algorithmic-discrimination issue in the Services without undue delay, and will provide reasonable cooperation with deployers in their own notification obligations to consumers, data-protection authorities, and attorneys general.

10. How Individuals Exercise Their Rights

If you are a consumer who has been subject to a consequential decision made with assistance from the Services and you wish to exercise a right afforded to you by the applicable AI-regulation regime (for example, to appeal the decision to a human reviewer, to correct personal data processed by the Services, or to obtain a statement of the principal reasons for the decision), your first point of contact is the deployer that made the decision. The deployer is the controller of the decision and is responsible for its deployer-to-consumer notice obligations under C.R.S. § 6-1-1704(3) and Article 26(11) of the EU AI Act.

If you cannot identify the deployer, or if you believe the deployer has not responded to your request, you may contact NotAI at [email protected]. NotAI will, within a reasonable period and no later than the period required by the applicable law, either (a) forward your request to the deployer and notify you of the deployer's identity, or (b) respond directly where NotAI itself is the controller of the decision.

For privacy-specific rights (access, deletion, correction, opt-out, portability, limit-SPI) see the NotAI Privacy Policy, Sections 7 through 10. For student-specific rights under New York Education Law § 2-d and 8 NYCRR Part 121, see the Parents' Bill of Rights.

11. Jurisdiction Reference Table

The following entries identify the AI-regulation regimes in which NotAI's disclosures above are provided, the effective date of each regime's operative provisions, and the principal authority to which NotAI reports or responds. Additional jurisdictions are added as regulation enters into force.

Colorado
  • Regulation: Colorado Artificial Intelligence Act, SB 24-205 (C.R.S. § 6-1-1701 et seq.), as amended by SB25B-004
  • Operative date: June 30, 2026
  • Principal authority: Colorado Attorney General

European Union
  • Regulation: EU AI Act, Regulation (EU) 2024/1689
  • Operative dates: prohibited-practices obligations, February 2, 2025; general-purpose AI-model obligations, August 2, 2025; high-risk AI-system obligations (Annex III), August 2, 2026; high-risk obligations under Article 6(1) (AI as safety components of, or products covered by, Annex I Union harmonisation legislation), August 2, 2027
  • Principal authority: Member-State market-surveillance authorities; European AI Office

New York City
  • Regulation: Automated Employment Decision Tools law (NYC Local Law 144; RCNY § 5-300 et seq.)
  • Operative date: in force since July 5, 2023
  • Principal authority: NYC Department of Consumer and Worker Protection

Illinois
  • Regulation: Artificial Intelligence Video Interview Act (820 ILCS 42); HB 3773 (amendments to the Illinois Human Rights Act)
  • Operative date: HB 3773 effective January 1, 2026
  • Principal authority: Illinois Department of Human Rights

California
  • Regulation: CCPA/CPRA automated-decisionmaking regulations; SB 942 (California AI Transparency Act)
  • Operative dates: SB 942 effective January 1, 2026; CPPA regulations on automated decisionmaking, risk assessments, and cybersecurity audits approved by the Office of Administrative Law on September 22, 2025 and effective January 1, 2026, with phased compliance (risk-assessment obligations from January 1, 2026; ADMT compliance from January 1, 2027; cybersecurity-audit certifications phased through 2028–2030 by revenue band)
  • Principal authority: California Privacy Protection Agency; California Attorney General

Utah
  • Regulation: Utah Artificial Intelligence Policy Act (SB 149; Utah Code Title 13, Chapter 72), as amended by SB 226 (2025) and SB 332 (2025), with related disclosure rules for AI mental-health chatbots in Utah Code Chapter 72a (HB 452, 2025)
  • Operative dates: original act in force May 1, 2024; 2025 amendments effective May 7, 2025
  • Principal authority: Utah Division of Consumer Protection; Office of Artificial Intelligence Policy

The absence of a jurisdiction from this table does not imply that the Services are unregulated in that jurisdiction. Deployers outside the jurisdictions listed above remain responsible for assessing the Services against local law.

12. Changes to This Statement

NotAI may update this AI Transparency Statement to reflect changes in the Services, in applicable law, or in NotAI's risk-management program. Material changes will be announced on this page with a revised "Last Updated" date and, where they affect a deployer's obligations, communicated to the deployer by email or through the NotAI deployer portal.

13. Contact

Questions about this AI Transparency Statement, requests for the deployer documentation package, or reports of suspected algorithmic discrimination may be directed to:

  • AI Transparency: [email protected]
  • Privacy: [email protected]
  • Mail: IS NOT AI LLC, Attn: AI Transparency, 7014 E Camelback Rd B100A, Scottsdale, AZ 85251
  • EU Representative (AI Act Article 22): same as the EU Representative identified in the NotAI Privacy Policy, Section 16

Mandate of the EU Representative. The EU Representative has been mandated in writing by NotAI under Article 22 of Regulation (EU) 2024/1689 (the "EU AI Act") and may be addressed, in addition to or instead of NotAI, by national competent authorities, market-surveillance authorities, and affected persons on all issues related to NotAI's compliance with the EU AI Act in relation to the Services.

See also the NotAI Privacy Policy, the NotAI Terms of Service, and the Parents' Bill of Rights.
