Dear Mr. President: A Note on AI Policy

Dear Mr. President,

AI is often described as the economic engine of the future. But to realize that growth, we must think beyond AI to the whole system of data, and the rules and context that surround it: our data infrastructure (DI). Our DI supports not only our AI technology but our technical leadership more generally; it underpins COVID reporting, airline ticket bookings, social networking, and most if not all activity on the internet. From unsuccessful high-profile launches, to the recent failure of Haven, to the months-long hack into hundreds of government databases, we have seen the consequences that weak DI can have. More data does not lead to better outcomes; improved DI does.

Fortunately, we have the technology and the foresight to prevent future disasters, if we act now. Because AI is fundamentally limited by the data that feeds it, winning the AI race means building the best DI. Your administration can play a helpful role here by defining standards and funding research into data technologies. Better DI will speed our response to future crises (consider the COVID data delays) and establish global technology leadership through standards and commerce. Better DI will also make anomalies evident, like the ones that could have helped us identify the Russia hack much sooner, so we can prevent future malfeasance by foreign actors.

To build on what we accomplished in the Obama Administration, here are some recommendations that your team may initiate in your first ninety days: 

  1. Prioritize Data Interoperability. In 2016, the Department of Commerce (DOC) discovered that it took six months on average to onboard new suppliers to a midsize trucking company—because of issues with data interoperability. The entire American economy would benefit from encouraging more companies to establish semantic standards, both internally and between companies, so that data can speak to other data. According to an early-2020 DOC report, the technology now exists for mismatched data to communicate more easily, and for data integrity to be guaranteed, thanks to a new area of math called Applied Category Theory (ACT). This technology should be made widely available.

  2. Enforce Data Provenance: As data is transformed across platforms—including trendy cloud migrations—its lineage often gets lost. A decision denying your small-business loan can, and should, be traceable back to the precise data the loan officer had at the time. There are traceability laws on the books, but they have rarely been enforced because, until now, the technology to comply hasn't existed. That is no longer an excuse. The fidelity of data, and of the models built on top of it, should be proven—down to the level of math—to have maintained integrity.


  3. Formalize Our Future: When we built 20th-century assembly lines, we established in advance where and how screws would be made; we did not ask the village blacksmith to fashion custom screws for every home repair. With AI, once we know what we want to automate (and there are good reasons not to automate everything!), we should define in advance how we want it to behave. As you read this, 18 million programmers are already formalizing rules across every aspect of technology. As an automated car approaches a crosswalk, should it slow down every time, or only when it senses a pedestrian? Questions like this one—across the whole economy—are best answered uniformly across manufacturers, based on standardized, formal, and socially accepted definitions of risk.
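To make the third recommendation concrete for your technical staff: the crosswalk question can be written down as a single explicit, auditable rule that every manufacturer shares, rather than a choice each one makes alone. A minimal sketch, in which the threshold and speed values are hypothetical placeholders standing in for standardized, socially accepted definitions of risk:

```python
# Hypothetical sketch: the crosswalk rule from the text, formalized in
# advance as one shared policy. The threshold and speeds are invented
# placeholders, not real standards.

PEDESTRIAN_CONFIDENCE_THRESHOLD = 0.01  # err heavily toward caution
CROSSWALK_SPEED_CAP_MPH = 15

def crosswalk_speed_policy(current_speed_mph, pedestrian_confidence):
    """Return the target speed when approaching a crosswalk.

    Defined once, in advance, so every manufacturer behaves the same
    way under the same definition of acceptable risk.
    """
    if pedestrian_confidence >= PEDESTRIAN_CONFIDENCE_THRESHOLD:
        return min(current_speed_mph, CROSSWALK_SPEED_CAP_MPH)
    return current_speed_mph

# A car at 35 mph slows whenever a pedestrian might be present...
assert crosswalk_speed_policy(35, pedestrian_confidence=0.4) == 15
# ...and keeps its speed only when the sensors report no pedestrian.
assert crosswalk_speed_policy(35, pedestrian_confidence=0.0) == 35
```

The point is not this particular rule but where it lives: in one formal, inspectable definition rather than scattered across proprietary codebases.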

Along with the DI improvements above, any broader AI policies will benefit from these qualities:  

  1. Automate Smaller: There is a danger in the idea that anything that can be automated fully should be automated fully. We naturally compartmentalize physical assembly lines: in a car factory, chassis construction and exterior painting are handled by separate mechanisms. That is a helpful framework for digital automation. Instead of linking disparate components into ever-larger automated systems, let's keep humans involved at every step, which will increase both safety and trust in the system.


  2. Ensure Human Backstops: In addition to human involvement while the system is working, we need human circuit-breakers for when it fails. AI is not magic, and errors are inevitable. Sometimes these result in catastrophic failure: a Boeing 737 MAX falling from the sky, or a massive pharmaceutical trial faltering. Automation will speed us up; at the same time, we must slow ourselves down. We must carefully define the outcomes we expect; then we should bake in circuit breakers at every level. If an outcome falls outside the parameters we have chosen, a switch flips and demands human oversight.


  3. Encourage Project-Based Education: Contrary to popular belief, we do not need millions more computer programmers. What we need are millions more technologically astute individuals to work alongside programmers. Deserving programs for children include FIRST Robotics, which encourages everyone to find a comfortable place within a development project. Other wonderful organizations, such as Girls Who Code and Black Girls Code, can broaden their messaging beyond coding itself to the many other careers in these new industries. If we expand the mission of groups like these, we can encourage today's kids to become comfortable, active participants in a digital world. We must diversify every seat around the table, including the ones deciding what values we want to program into our world.
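The circuit-breakers described in the second point can likewise be made concrete: agree on the expected outcome in advance, and halt the automation the moment a result falls outside it. A minimal sketch, with hypothetical bounds standing in for whatever parameters a real system would choose:

```python
# Hypothetical sketch of a human circuit-breaker: the acceptable outcome
# is defined before the system runs, and any result outside it stops the
# automation and demands a person's attention. The bounds are invented.

EXPECTED_RANGE = (0.0, 1.0)  # parameters chosen before the system runs

class HumanReviewRequired(Exception):
    """Raised when an automated result falls outside the agreed parameters."""

def checked(outcome, expected_range=EXPECTED_RANGE):
    low, high = expected_range
    if not (low <= outcome <= high):
        # The switch flips: automation halts, human oversight takes over.
        raise HumanReviewRequired(f"outcome {outcome} outside [{low}, {high}]")
    return outcome

assert checked(0.5) == 0.5  # in range: automation proceeds
try:
    checked(4.2)            # out of range: the breaker trips
except HumanReviewRequired:
    pass
```

Baking a check like this into every level of a system is what turns "human oversight" from a slogan into a mechanism.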

President Biden, your administration will oversee a massive digitalization of our economy. By the end of your first term, 99 percent of all the data that has ever existed will have been created since you took office. The strength of the infrastructure that automates this data will underpin our economy for the next generation. Focused investment can materially improve our companies' flexibility to respond to surprises, along with the quality of available jobs and our trust in digital systems. With these foundational recommendations, and your broad and active leadership, we can avoid the dystopian vision of AI that so many fear. Instead, we can look forward to a digital future in which we trust.


Eric Daimler served as an authority on AI and a contributor to AI policy during the Obama administration. He is the CEO of an MIT spinout and holds a PhD in Computer Science from Carnegie Mellon University.


