Over the last decade, the landscape of Artificial Intelligence (AI) has transformed remarkably, from a niche area of research into a pervasive force shaping our daily lives. What began as simple rule-based systems underwent a seismic shift with the introduction of machine learning and deep learning techniques. Tech giants such as Google, Facebook, Microsoft, and Amazon embraced this transformation, advancing AI innovation to unprecedented levels, and a great number of startups have unsurprisingly followed in their footsteps, contributing to the dynamic expansion of this transformative technology.
Recent surges in AI have pushed governments around the globe to bring regulation and future approaches to AI into the international conversation. Fierce debates continue over the pros and cons of regulating what society once dismissed as a radical, far-fetched idea – and some even call for prohibiting certain types of AI outright. However, with an industry expected to contribute 15.7 trillion dollars to the global economy by 2030, it’s becoming apparent that the top concern is not how or when AI will be regulated, but by whom.
AI is becoming more integrated into our lives every day. AI technologies now permeate our daily routines like never before, from virtual assistants like Siri and Alexa to recommendation algorithms on streaming platforms and personalized advertisements on social media.
As of 2022, 77% of businesses had entered the realm of AI – whether by integrating AI into their operations or by exploring the possibility – a substantial increase from the roughly 50% of businesses reported just a few years earlier. The AI market is also expected to continue growing at a compound annual growth rate of 38.1% through 2030.
Outside the business world, AI has undoubtedly left a strong impression on consumers as well.
Since its launch in November 2022, OpenAI’s ChatGPT has emerged as a groundbreaking generative chatbot, engaging in conversations that closely mimic human interaction. Within five days of launch, ChatGPT surpassed the one million user mark, a milestone that took widely used online services like Netflix and Twitter years to achieve. The state-of-the-art language model behind ChatGPT set off a surge in the generative AI market, from outright copycats as well as established companies adopting a similar approach.
World leaders now serve as referees on the contentious field where these tech companies play. The breakneck pace of AI development forces regulators worldwide to grapple with the challenge of constructing effective frameworks to govern its use.
In October 2023, U.S. President Joe Biden signed an Executive Order meant to send a message that America would lead the way in managing the risks of AI. In essence, the policy requires leading AI companies to share safety test results and other “critical information” with the U.S. government.
While charting a path forward is always welcome, many experts believe the order simply doesn’t go far enough. Several of the provisions it highlights have been criticized as unfeasible. The one cited most often is AI watermarking, which would label AI-generated content. Months later, however, there is still no established method for making this identification, because watermarks are easy to evade or to attach falsely to genuine work.
Strides toward AI regulation are also being made beyond the United States, as shown by the U.K.’s AI Safety Summit. The event convened 150 government and industry leaders around a single goal: moving the industry forward as a global effort. The biggest takeaway from the conference was that future strides in AI regulation should focus on people, not technology.
Where does this leave companies within the industry?
The industry’s leading hardware player, Nvidia, is a technology company specializing in the production of chips known as graphics processing units (GPUs) – a position evidenced by its roughly 80% share of the global market.
These GPUs, packed with enormous numbers of fast, efficient transistors, are a key component of functional artificial intelligence. The recent boom in generative AI exposed a clear dependence on Nvidia among other tech companies. The chip designs Nvidia has polished over the past several years remain the cream of the crop, while competitors explore ways to compete and ramp up production of their own to a similar degree.
Google, Microsoft, and Amazon have made substantial investments in AI research and development in an effort to create ripples within the industry. Google’s AI subsidiary, DeepMind, has achieved breakthroughs in areas such as natural language processing and game playing, with the stated aim of benefiting society. Amazon publicly announced that it would invest up to $4 billion in the AI startup Anthropic in a bid to rival the caliber of Nvidia’s quality and production; Google has already committed $2 billion to the same company.
Amid this rampant competition, seven leading AI companies in the United States – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – have voluntarily agreed to comply with safeguards to manage the risks of AI development.
The goal of responsible innovation points us to the ethics of AI, which are crucial to address sooner rather than later. Among them, algorithmic bias, the potential for job displacement due to automation, and the erosion of privacy rights raise fundamental questions that these guiding figures in the industry must face head-on.
Algorithmic bias refers to human bias that taints an AI system and is then perpetuated at scale once the system is in production. If these biases go unaddressed, they limit AI’s potential and can hinder people’s ability to participate fully in society. In our healthcare system, for example, Computer-Aided Diagnosis (CAD) systems return lower-accuracy results for Black patients than for white patients. Left in operation, such biases lead to disparities in diagnosis, treatment, and overall healthcare outcomes between groups.
The fear of AI taking jobs stems from the potential displacement of workers across various industries, leading to unemployment and economic instability. While proponents argue that AI can enhance efficiency and productivity, the societal consequences of job displacement would be just as prominent. The market research firm Forrester has forecast that AI will replace 2.4 million jobs by 2030. Yet neither tech companies nor legislators have offered displaced workers a definite solution beyond bluntly embracing AI’s arrival.
As these companies vie for dominance, it’s essential to recognize that collaboration and ethical considerations must underpin the race to lead the AI industry. A decentralized, collaborative approach involving industry and regulatory bodies is crucial for realizing the full potential of AI.