AI’s Google-Colored Glasses

Alexander Dante Camuto
Jan 8, 2021


On November 30th, 2020, Google’s DeepMind made history with its AlphaFold Artificial Intelligence (A.I.) algorithm. AlphaFold has essentially solved the “protein folding problem”: the 50-year-old challenge of determining a protein’s 3D shape directly from its amino acid sequence.

Proteins are the building blocks of life, and the way they work is determined by their shape, so cracking this problem is a big leap forward in understanding how biological systems operate at molecular scales.

Academics, famous for their stoicism, did not hold back in their praise. Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology, enthused:

This will change medicine. It will change research. It will change bioengineering. It will change everything.

Mohammed AlQuraishi, a computational biologist at Columbia University, gushed:

It’s a breakthrough of the first order, certainly one of the most significant scientific results of my lifetime.

At first glance, this appears to be a moment of celebration for the West. One of its most promising and cutting-edge companies has made a groundbreaking discovery, decades earlier than anyone thought possible. The endeavor was fuelled by impressive technological prowess and research talent, and one could argue that policies to attract A.I. talent and foster innovation are working as intended. It is difficult to contest that point of view in the face of such results.

Nevertheless, even though AlphaFold is undoubtedly a significant achievement, the subtext is more nuanced. The fact that a tech company like Google is now dominant in producing scientific research is a symptom of a weak governmental approach to A.I. Though the current situation has delivered valuable results in the short term, it may prove short-sighted and could threaten the West’s dominance in A.I. research in the years to come.

Where is government A.I.?

Big Tech — Amazon, Apple, Google, Facebook, and Microsoft — is spending heavily on A.I. In 2017, these companies spent a combined $60 billion on A.I.-related research and development (R&D). That same year, the U.S. government spent a roughly equivalent amount on all of its non-military R&D.

To attempt to level the playing field, governments are slowly upping their involvement in the space. In November 2017, the UK announced a plan to invest about $1.3 billion of public and private funds in A.I. over the coming years. France announced a similar plan with a comparable budget the following year. The U.S., for its part, has committed $2 billion to military A.I. research over the coming five years. These investments amount to hundreds of millions of dollars per year, a far cry from the tens of billions invested by Big Tech.

A recent report by the Center for a New American Security calls for the U.S. to raise this funding to $25 billion by 2025 to have any hope of remaining competitive in the coming decades: a ten-fold increase from current levels that would still be only a fraction of Big Tech’s spending. Actively pursuing this line of research is widely agreed to be critical for governments, as the potential benefits for governance are enormous. By recent estimates, properly deployed A.I. systems could save the UK’s public health system £12.5 billion a year.

Government and private companies are bound to have different priorities. It is therefore critical that governments up their A.I. R&D budgets. As the University of Oxford’s Windfall Clause Report puts it:

A.I. is a rare ‘general-purpose technology’ with the potential to transform nearly every sector of the economy.

Improvements to public transportation, health care, and the machinery of governance are not going to come from companies driven by ad sales, and until governments step up their investment and involvement, the West will be reliant on tech giants like Google for A.I. innovation.

Why does Google care so much about A.I.?

In recent years, after a long “A.I. winter” that entailed close to no funding for academics, Machine Learning (a subset of A.I. methods) has become the hot new field in academic research. The frenzy surrounding the field keeps growing, and with this renewed attention has come money, a lot of money. Some thirty thousand researchers now work in the field. Big Tech companies host open bars at academic conferences to lure young academics into their ranks. Professors, once underpaid, are now offered hybrid industry-academia positions that come with up to seven-figure salaries.

The rebirth of A.I. has come mostly from new advances in neural networks. These algorithms date back to 1943, when researchers first attempted to build computational models of the brain; as such, neural networks are vaguely inspired by how the brain’s neural structures operate. At their core, they give computers a way to learn by processing data.
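
To make the idea concrete, here is a minimal sketch of a neural network learning from data. The task (the toy XOR function), the network size, and the training settings are all illustrative choices for this example, nothing specific to Google’s systems:

```python
import numpy as np

# Toy data: the XOR function, a classic task that a single neuron
# cannot learn but a small two-layer neural network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(5000):
    # Forward pass: push the data through both layers.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight to shrink the error.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

print(p.round(2))  # converges towards [[0], [1], [1], [0]]
```

Everything that follows, from ad targeting to AlphaFold, is this same loop of “adjust weights to reduce error”, scaled up by many orders of magnitude.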

One of the main problems these algorithms faced up until the early 2000s was that they need large amounts of data and many neurons to perform well, and therefore fast, powerful chips to deliver results. Advances in chip technology, combined with the generation of large amounts of data, revived these methods in the early 2000s.

Google and other tech giants were well poised to take advantage of the technology. Their computer clusters were already state of the art, and their business models relied on collecting large amounts of customer data from which they could sell targeted ads. These companies could use the new A.I. tools for two purposes. The first was to provide new services that generate more ad views, such as YouTube’s video recommendations and Facebook’s curated feeds, which keep users browsing longer and longer.

The second application was to improve the conversion rate on the ads they serve. If you search for a washing machine, you are demonstrating intent. However, if you layer this with your recent searches for a mortgage and a moving company, Google can infer that you are probably a new homeowner. As such, you are more likely to purchase the washing machine, and ads targeted to you are worth more to an advertiser. Making those sorts of connections across billions of users is possible with neural networks and has proven immensely profitable.
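
As a toy illustration of that kind of inference (all the signal names, weights, and the bias below are invented for this sketch; real ad systems learn such weights from billions of examples with neural networks), several weak signals can be combined into a single purchase-intent score:

```python
import math

def intent_score(signals, weights, bias=-3.0):
    """Logistic combination of binary browsing signals into a 0-1 intent score."""
    z = bias + sum(w for name, w in weights.items() if signals.get(name))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical signals and weights, purely for illustration.
weights = {
    "searched_washing_machine": 2.0,  # direct purchase intent
    "searched_mortgage": 1.0,         # hints at a recent home purchase
    "searched_moving_company": 1.0,   # hints at a recent home purchase
}

alone = intent_score({"searched_washing_machine": True}, weights)
layered = intent_score({"searched_washing_machine": True,
                        "searched_mortgage": True,
                        "searched_moving_company": True}, weights)
print(f"intent alone: {alone:.2f}, layered with homeowner signals: {layered:.2f}")
# intent alone: 0.27, layered with homeowner signals: 0.73
```

The higher score translates directly into a higher price for the ad slot.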

Because of this, Big Tech has an incentive to set the focus of A.I. research, and these companies have moved aggressively into A.I. R&D. Google, the foremost A.I. player among the tech giants, became in its own words an ‘A.I.-first’ company in 2017. Since 2009 the company has acquired 30 A.I. startups, including the London research behemoth DeepMind.

Beyond this aggressive commercial expansion, Google has extended its tentacles into the hallowed halls of academia by funding Ph.D. students across the globe, offering the hybrid industry-academia positions mentioned previously, and inviting Ph.D. students to complete highly paid internships at Google offices.

Why academia?

Big Tech needs academia as a talent pool from which to hire highly trained researchers. Currently, demand for this specialized talent greatly outstrips supply, and tech companies are competing for a scarce and valuable resource. To attract talent, companies need to offer potent incentives: they cough up enormous sums for well-established researchers and create an attractive research environment by churning out high-quality papers in the most prestigious publications.

A friend recently completed an internship at Google Brain. Once he had done the heavy lifting of deriving new theoretical results, the task of writing the paper was delegated to the vast Google paper-writing apparatus. Months after his internship, the paper was accepted at a top-tier conference with his name as primary author. He had not written a single word of the final draft.

Google has gamed the academic publication process, and nowhere is its influence more apparent than at the annual Conference on Neural Information Processing Systems (NeurIPS), the crème de la crème of machine learning venues. In 2020, 1428 papers passed the conference’s grueling review process, of which 180 were by “Googlers”. More than a tenth (180/1428 ≈ 12.6%) of the foremost ML papers of the year came out of Google.

Why is this a bad thing?

Google is paying top-tier academics for their talent. It is funding cutting-edge research in the space and delivering incredible results, providing money that governments are either unable or unwilling to provide. In particular, the U.S. government has severely lagged behind China and Europe in A.I. research investment over the past decade. Google and other tech giants have stepped in to fill the gap, keeping the U.S. competitive in A.I. research.

Though the UK has been more targeted than the U.S. in its A.I. initiatives, the research output of these initiatives pales in comparison to that of powerhouses like DeepMind. Governments and universities are unable to attract top talent because the pay they offer is a fraction of that offered by Big Tech research labs, a problem raised repeatedly by those who run these initiatives.

This brain drain from academia and the public sector will have devastating consequences for the West’s dominance in the field. The lack of Ph.D. students staying in academia to become professors is depriving the field of the mentors critical to training young researchers. Companies like Google have replaced governments in funding research and have cannibalized much of the research output in the ML space. The problems this creates are numerous.

A stifling of innovation

The benefits of the inter-relationship between academia and industry have been debated for decades. When companies have an outsize influence on funding and commercial interests determine what is worthy of research, the thematic diversity of research narrows considerably. For A.I., this reduction in diversity has recently been measured as an excessive focus on deep learning (algorithms built on neural networks). Other forms of A.I., such as Bayesian methods, which work with smaller amounts of data and are less commercially viable for tech companies, have fallen by the wayside.
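
To illustrate the contrast (the prior and the data below are made up for this sketch), a Bayesian model can produce an honest estimate, with quantified uncertainty, from just a handful of observations, where a data-hungry neural network would have little to work with:

```python
from scipy import stats

# Estimating a coin's bias from only ten flips with a Beta-Bernoulli model.
heads, tails = 7, 3
prior_a, prior_b = 1.0, 1.0  # uniform Beta(1, 1) prior over the bias

# Conjugate update: the posterior is Beta(prior_a + heads, prior_b + tails).
posterior = stats.beta(prior_a + heads, prior_b + tails)

low, high = posterior.interval(0.95)
print(f"posterior mean: {posterior.mean():.2f}")          # 0.67
print(f"95% credible interval: ({low:.2f}, {high:.2f})")  # wide: little data
```

There is no vast dataset to monetize here, which is precisely why such methods attract little corporate investment.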

This narrowing has crept up on the field unnoticed because A.I. researchers have conflated academic and corporate research for years. Corporate research is geared towards developing methods that will be profitable in the near term; academic research, in contrast, is often exploratory and less concentrated on a single domain. The fact that many academic researchers are in the pay of Big Tech companies means they are not truly free to pursue exploratory research. As Meredith Whittaker, the faculty director of the A.I. Now Institute, puts it:

At any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest.

This blurring of academic and corporate boundaries came to a head when Google fired its lead A.I. Ethics researcher, Timnit Gebru. Her dismissal came in the wake of her criticism of the A.I. methods the company uses. Regardless of who you think was in the right, the situation highlights that tech companies are not academic institutions and will censor research that could harm their bottom line. Tech companies let their economic interests guide their research.

Innovation is often driven by exploratory academic research. It took decades for neural networks to mature from their conception, including long stretches in which these models were not commercially profitable and their development was driven solely by academic curiosity. The next big thing often comes from unexpected places, and chasing methods that directly benefit companies like Google will make it harder and harder for the West to innovate.

To make matters worse, A.I. research at Big Tech companies has led to a fixation on algorithms that can only run on giant data and computing centers, which are most beneficial commercially for companies driven by ad sales. The most egregious example is GPT-3, a language model developed by OpenAI, a for-profit artificial intelligence lab, that can churn out long-form text with impressive fluency. To train it, the company paid Microsoft to build a custom supercomputer; training the model on standard cloud hardware would cost roughly $4.6 million, more than most academic A.I. institutes’ annual budgets. To develop such models, researchers need to join companies with near-unlimited budgets, which further drains top talent away from academia and gives companies even more power to narrow the focus of A.I. research.
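
A back-of-envelope check on that figure (all the numbers below are rough public estimates, not official OpenAI or Microsoft figures) shows why the cost is out of reach for most labs:

```python
# GPT-3's training run is commonly estimated at ~3.14e23 floating-point
# operations; a V100 GPU sustains very roughly 2.8e13 FLOP/s on this workload,
# and cloud V100s rented for about $1.50 per hour at the time.
total_flops = 3.14e23
flops_per_gpu_second = 2.8e13
usd_per_gpu_hour = 1.50

gpu_hours = total_flops / flops_per_gpu_second / 3600
gpu_years = gpu_hours / (24 * 365)
cost_musd = gpu_hours * usd_per_gpu_hour / 1e6
print(f"~{gpu_years:,.0f} GPU-years, roughly ${cost_musd:.1f}M at cloud prices")
# ~356 GPU-years, roughly $4.7M at cloud prices
```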

A stifling of competition

Google loves to acqui-hire — to crush the competition and bootstrap its dominance in up-and-coming technologies. As noted before, the company has acquired more A.I. startups than any other tech giant, and I would postulate that there has not yet been a “Google of A.I.” because Google is the Google of A.I.

These practices, combined with the direct funnel from academia to industry, have arguably weakened the A.I. startup space. Many Ph.D. students graduate and immediately go to work for one of the large tech firms. The big wave of A.I. startups that venture capitalists talked about for years has not arrived, and the big players in the space are still the incumbent tech giants of the early 2000s.

Because these tech giants are the only significant players in the space, there is little diversity in the missions of A.I. companies. Google wants to sell better-targeted ads, as does Facebook, and Amazon wants to better target recommended products so consumers spend more. Though projects like AlphaFold make big splashy headlines and are great for PR-washing, the vast majority of A.I. innovation has been concentrated in e-commerce and targeted advertising.

These sectors, though important for the West’s economy, are not broadly beneficial. The aims of a company are not aligned with the loftier goals of Western governments in governance, public transportation, health care, and other spheres of public interest.

Companies are not governments

Even when Big Tech companies collaborate closely with Western governments, the alliance can be strategically disastrous. Take the example of the U.S. Department of Defense’s reliance on Google and other tech firms for developing military ML tools, a dependence the department acknowledges in a 2018 report:

We recognize that strong partners are crucial at every stage in the technology pipeline, from research to deployment and sustainment. Today, the U.S. private sector and academic institutions are at the forefront of modern A.I. advances. To ensure continued prosperity and the capacity to align their A.I. leadership with critical defense challenges, we are committed to strengthening the private sector and academia while bridging the divide between non-traditional centers of innovation, such as the A.I. startup community, and the defense mission.

Google has been helping the U.S. military develop autonomous weapon systems and more accurate targeting software. Though employees protested forcefully enough that the company withdrew from Project Maven, one of its largest partnerships with the Pentagon, Google appears keen to keep aiding the U.S. military’s A.I. deployment. Yet when Project Maven ended, Google started an A.I. lab in China, and as Peter Thiel points out, there is little doubt that the research from this lab will be used for military purposes:

All one need do is glance at the Communist Party of China’s constitution: Xi Jinping added the principle of civil-military fusion, which mandates that all research done in China be shared with the People’s Liberation Army, in 2017.

Because their incentives are mainly economic, tech firms are liable to play both sides of the field, offering technology and services to anyone willing to pay. Anything they develop is likely to end up in the hands of military and strategic adversaries.

Reinvigorating A.I. research

The problem here is a delicate one. On the one hand, private industry is putting monumental resources behind the development of new deep-learning algorithms. On the other, this centralization of both financial resources and A.I. infrastructure means that these algorithms are deployed mainly to squeeze additional profit out of ad sales.

Breaking up tech companies is probably not the answer, as the immediate consequence would be to destroy the West’s main hubs of innovation. Some of these companies’ outputs, like AlphaFold, though only a minority of Big Tech’s R&D, can still have a positive impact.

Governments on their own are unlikely to be able to compete with the technical capabilities and budgets of trillion-dollar companies. However, they can create an economic environment that fosters greater competition by limiting the ability of large tech firms to purchase and then close down competitors. Distributing A.I. capabilities across more companies and institutions would logically reduce the myopic focus of the industry and encourage greater entrepreneurship in spaces other than e-commerce and advertising.

Western governments also need a clearer vision for their A.I. programs. The U.S.’s recent executive orders making A.I. an R&D priority are a step in the right direction, but the mandates are laughably vague and the sums involved are negligible ($140 million over the next 5 years). Compare this with China, where companies move in lockstep with the government and rapid advances are driven by the state’s goal of becoming the world leader in A.I. by 2030. The U.S. needs a similar vision to motivate and inspire its A.I. researchers to work towards goals that are not dictated by commercial interests. Given the potential ramifications and benefits of properly deployed A.I. systems for governance, cliché as it is, the U.S. needs a program with the vision and magnitude of the Apollo program, which concluded nearly 50 years ago. One can hardly blame Google and others for imposing their research agenda when governments have failed to lay out an inspiring alternative vision.

Without decisive steps that spread the concentration of A.I. innovation and that provide a clear strategic vision, the West is likely to lose its advantage in the development of new A.I. technologies. In the long run, what looks like a strength today may prove to be a handicap tomorrow.
