The embedded systems engineer who uses a large language model (LLM) to generate firmware documentation, the product team that uses an image diffusion model to generate UI assets, the autonomous driving company that trains a neural network on proprietary sensor data — all of them face the same unresolved legal question: who, if anyone, owns what the artificial intelligence (AI) produced?

This is not an academic question. It sits at the center of intellectual property (IP) strategy for any technology company that uses AI in its development pipeline, and the answer has material consequences for patent strategy, licensing agreements, investor due diligence, and competitive moats.

The Constitutional Foundation: Authorship Requires a Human

U.S. copyright law is grounded in the United States Constitution's Intellectual Property Clause (Article I, Section 8, Clause 8), which authorizes Congress to secure exclusive rights to "Authors and Inventors." The Copyright Act of 1976 (17 U.S.C. § 101 et seq.), as consistently interpreted by the United States Copyright Office (Copyright Office) and the federal courts, requires human authorship as a prerequisite for copyright protection. A work created entirely by a machine, without creative input from a human author, is not protectable.

This principle, long established in cases involving photographs and computer-generated works, was reaffirmed by the Copyright Office in a series of guidance documents issued between 2023 and 2025 following the explosion of generative AI tools. The Office's position is clear: it will not register claims to copyright in works produced by machines operating autonomously, even highly sophisticated ones.

What the Office has not done — because no court has forced it to — is draw a precise legal line between "machine-generated" and "human-authored with AI assistance." That line is the entire battleground.

The Spectrum: From Tool to Author

It is useful to think of AI involvement in creative work as a spectrum. At one end, a human engineer uses a code completion tool to autocomplete a function body — the human makes every meaningful creative choice, and the tool provides execution efficiency. This is analogous to using a spell-checker or a compiler. No one seriously argues the compiler co-authors the resulting software.

At the other end, a human types a single prompt — "write a complete firmware driver for an I2C sensor in C" — and receives a fully formed, 400-line source file. The human made one creative choice (what to ask for) and exercised no judgment over any of the technical decisions in the output. The Copyright Office's current position suggests this output is not protectable.

In the middle — which is where most real engineering and product development lives — a human provides detailed specifications, iterates through multiple generations, selects among alternatives, modifies outputs, integrates them with other work, and exercises substantial creative judgment throughout. The Copyright Office has registered works in this category, treating the human's creative choices as the copyrightable expression, even where AI was the mechanism of execution.

What This Means for Technology Product Teams

For an embedded systems company or software product team, the practical implications break into three areas:

Source code and firmware

AI-generated source code occupies the most legally uncertain ground, in part because copyright protection for software has always been relatively thin. The functional elements of software — the algorithms, the data structures, the system architecture — are generally not protectable by copyright regardless of how they were created. Copyright in software protects the specific expression: the particular way a programmer chose to write a function, not the function itself.
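The expression/function distinction is easiest to see in code. The two C functions below (a hypothetical sketch; the names are purely illustrative) compute the same unprotectable "function" — reversing the bits of a byte — yet each is a distinct expression of it, and it is only that expression that copyright can reach:

```c
#include <stdint.h>

/* Expression 1: iterative bit-by-bit reversal. */
uint8_t reverse_bits_loop(uint8_t b) {
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (uint8_t)((r << 1) | (b & 1));  /* shift result left, append low bit of b */
        b >>= 1;                            /* consume one bit of the input */
    }
    return r;
}

/* Expression 2: parallel swap of nibbles, pairs, then single bits.
 * Same input/output behavior, entirely different written expression. */
uint8_t reverse_bits_swap(uint8_t b) {
    b = (uint8_t)(((b & 0xF0) >> 4) | ((b & 0x0F) << 4));  /* swap nibbles */
    b = (uint8_t)(((b & 0xCC) >> 2) | ((b & 0x33) << 2));  /* swap bit pairs */
    b = (uint8_t)(((b & 0xAA) >> 1) | ((b & 0x55) << 1));  /* swap adjacent bits */
    return b;
}
```

A competitor who copies either file verbatim may infringe the copyright in that expression; a competitor who independently writes a third bit-reversal routine has copied only the unprotectable idea.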

If an AI generates source code, and the human contribution is limited to the prompt, the resulting code may not be copyrightable. This matters most when you are asserting copyright against a competitor who copies your codebase. It matters less for day-to-day development, where your real protection may come from trade secrets, not copyright.

Training data and model weights

Companies that train proprietary AI models on their own engineering data — sensor data, simulation outputs, test results, labeled safety-critical scenarios — face a different set of questions. The training data itself may be copyrightable (if it is sufficiently original). The model weights that result from training are the subject of active litigation, with no clear legal consensus. The outputs of the trained model are subject to the same human-authorship analysis described above.

Trade secret law may be a stronger protection for proprietary training datasets and model weights than copyright, particularly for embedded systems companies where the value is in specialized technical knowledge rather than creative expression.

Patent strategy

Patent law is more settled on this point. The United States Patent Act (35 U.S.C. § 1 et seq.) requires that inventors be natural persons, and the United States Patent and Trademark Office (USPTO) will not grant a patent listing an artificial intelligence system as an inventor. Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), settled the question, holding that "inventors" under the Patent Act must be natural persons, and it has not been disturbed by subsequent legislation or court decisions.

The practical implication: if an AI system generates a novel technical solution — say, a new approach to Automotive Open System Architecture (AUTOSAR) stack optimization, or a novel sensor fusion algorithm — and a human engineer implements and refines it, the patent application must name the human as the inventor, and the human's creative contribution to the inventive concept must be genuine, not merely the act of running the AI tool.

guibert.law Insight

The companies best positioned legally are those that document human creative involvement throughout their AI-assisted development process. Treat your AI tooling like any other engineering tool: maintain design records, document the human decisions that shaped the output, and build your IP strategy around what your engineers actually contributed — not just what the model generated.

The Road Ahead

Congress has held multiple hearings on AI and copyright but has not yet enacted legislation. The Copyright Office is expected to issue additional guidance in 2026. Several district court cases involving AI-generated content are working their way through the system and may produce circuit-level precedent within the next two years.

For technology companies building products today, the prudent approach is not to wait for the law to settle. It is to build IP practices that do not depend on the assumption that AI outputs are automatically protectable, and to ensure that your human engineers are genuinely driving the creative and inventive choices that your IP portfolio is built on.


Attorney advertising. The information in this post is provided for general informational purposes and does not constitute legal advice. Prior results do not guarantee a similar outcome. © 2026 guibert.law