Let us go back to 1956, when the prospect of creating Artificial Intelligence (AI) outweighed any of its conceivable downsides. Who could possibly have predicted that something viewed as life-saving could turn into something potentially life-taking? It was a breakthrough idea: a dream making its way into the real world. But like many dreams, it met a harsh reality and had to redefine itself.
The term ‘Artificial Intelligence’ was coined at a multi-week gathering known as the Dartmouth Workshop. It brought together several of the most influential researchers of the time, including John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon, and Ray Solomonoff. They met at Dartmouth College in Hanover, New Hampshire, to discuss machine intelligence and other prominent fields of study at the time. News of the workshop spread worldwide, and it is today considered the cradle of Artificial Intelligence.
During the 1950s, the first advancements in computers and related technology were made. Alan Turing, a mathematician and computer scientist, published “Computing Machinery and Intelligence,” the paper that proposed the “imitation game” (now known as the Turing test) and first framed the question of whether machines can think. Arthur Samuel, another influential computer scientist, set out to develop a program to play checkers. First running in 1952, it became one of the earliest computerized games ever developed, yet far from the last.
This cued the period known as AI Maturation. In 1954, American inventor and engineer George Devol designed the first industrial robot, the Unimate, which was later used to automate welding and metalworking in car factories. It was one of many robots created at the time. Technology held new power now that it could be used to minimize human labor and get jobs done. For the rest of the 20th century, progress in the field of Artificial Intelligence grew substantially. Conferences were held, associations were formed, and more and more people got involved; the idea was gaining traction. It was almost as if scientists had started “cracking the code.”
Once scientists realized that robots could be programmed to perform risky tasks and assist in surgical procedures, it set off another period of learning and development in AI systems. This is when the robotic arm PUMA 560 came into use: guided by computed tomography (CT) imaging, the arm inserted a needle during a brain biopsy, a procedure used to diagnose brain lesions. The biopsy had previously been a dangerous procedure; the robot reduced the risks associated with the unsteady hands of doctors.
Successful implementations like this were exciting and gave rise to further experiments. But along with the successes came the potential to misuse the power of these machines.
Decades later, companies began experimenting with drug-designing technologies. In 2022, Collaborations Pharmaceuticals, a privately owned company, described its molecular design software, MegaSyn, initially developed to comb through molecular structures and propose cures for diseases and illnesses. Scientists could input information about a disease, and within minutes the software would produce thousands of candidate molecules. It was programmed to filter out candidates whose predicted side effects would be worse than the disease itself.
Sean Ekins, one of MegaSyn’s developers, wanted to see whether the software could be used maliciously. He inverted its scoring logic: instead of directing the model to produce treatments with minimal side effects, he directed it to seek out toxicity. He input the parameters and came back a few hours later to see the results; over 40,000 molecules had been generated, many predicted to be more toxic than known chemical warfare agents. Some of them were molecular structures that had never been seen before. This experiment opened up some scary possibilities for us humans.
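To make the idea concrete, here is a minimal toy sketch in Python of how such an inversion can work. This is not MegaSyn’s actual code; the function names and weights are hypothetical stand-ins. The point is how small the change is: flipping the sign on a single toxicity weight turns a search for safe treatments into a search for poisons.

```python
import random

def predicted_efficacy(molecule: str) -> float:
    """Stand-in for a learned efficacy model (hypothetical)."""
    return random.random()

def predicted_toxicity(molecule: str) -> float:
    """Stand-in for a learned toxicity model (hypothetical)."""
    return random.random()

def score(molecule: str, toxicity_weight: float) -> float:
    # Benign setting: toxicity_weight = -1.0 penalizes toxic candidates.
    # Malicious flip: toxicity_weight = +1.0 rewards them instead.
    return predicted_efficacy(molecule) + toxicity_weight * predicted_toxicity(molecule)

def generate_candidates(n: int) -> list[str]:
    """Stand-in for a generative model proposing candidate molecules."""
    return [f"molecule-{i}" for i in range(n)]

candidates = generate_candidates(10_000)

# Same generator, same predictors -- only the sign of one weight differs.
treatments_first = sorted(candidates, key=lambda m: score(m, -1.0), reverse=True)
poisons_first = sorted(candidates, key=lambda m: score(m, +1.0), reverse=True)

print("top 'treatment' candidate:", treatments_first[0])
print("top 'poison' candidate:", poisons_first[0])
```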
In the meantime, artificial intelligence was quickly becoming part of people’s lives, and so were questions about its harmful effects: bias, manipulation, loss of privacy, and other ethical concerns. What would AI decisions look like? What would intelligence devoid of creativity and empathy mean for us? Could AI outsmart us? With every new innovation comes an inherent risk; with AI, that risk seemed manifold.
The RAND (Research and Development) Corporation began asking some of these questions. Founded after World War II, its original mission was to connect military planning with research and development, including work on the risks posed by new weapons. The rapid growth of AI caught the organization’s attention, and it began to look into potential solutions and ways to minimize the risk.
To address this issue, RAND built simulations that let researchers observe AI in simplified settings. The design allowed an AI system to be “projected” into hypothetical futures and assessed for complications before release. Compared with earlier AI development done without risk assessment, RAND’s approach was an effort to contain at least some of the risk. People could see both the pros and cons of a technology before setting it loose in the world, adding a layer of protection and safeguards to regulate the AI itself. Scientists could understand and learn about the consequences while also improving communication and transparency.
So are we there yet? Have we learned to leverage Artificial Intelligence while preventing all possible risk? Absolutely not. We are still learning about the vast implications of using AI; our governments have attempted to put forward some regulations around it, but we have a long way to go. AI is evolving far more rapidly than our attempts to mitigate its risks.
Many scientists are now finding ways to improve their collaboration and communication; these major changes in our society need to be discussed.
Here is what ChatGPT has to say about it: “The responsible development, deployment, and use of AI are crucial in ensuring that it does not become ‘out of control’ or ‘scary.’ It’s a balance between harnessing the benefits of AI while addressing its potential negative consequences. Public awareness, education, and participation in these discussions are essential for shaping the future of AI in a way that aligns with societal values and goals.” It’s not wrong.
One thing is clear. We must continue to ask questions and seek answers. We, as a society, should make every effort to use AI responsibly while taking advantage of its immense potential. While there will always be bad actors, the collective effort to minimize the actual risk should evolve into real-life applications, regulations and laws. Hopefully, we don’t resort to ChatGPT to do that.