The Hype, Greed, and Power Fueling the AI Craze While Disregarding Humanity’s Needs

In the decades since its inception, artificial intelligence (AI) has been the subject of alternating waves of enthusiasm and disillusionment over its scientific discoveries, technical innovations, and socioeconomic effects. Recent successes have prompted new claims about AI’s transformative and disruptive consequences. AI is changing how we interact with technology and with other people, how we view ourselves, and how we approach domains such as banking, democracy, and the legal system. Yet even though AI in all its manifestations is now pervasive and has delivered real benefits for society and for individuals, those benefits are unevenly distributed. We need to be discussing, and preparing for, the profound economic, legal, political, and regulatory effects that artificial intelligence will have on our society.

The difficulties that must be overcome include determining who is at fault when an autonomous car injures a pedestrian and managing a worldwide arms race in driverless vehicles, to name just two. Without a doubt, artificial intelligence will change the way we work. Alarmist headlines emphasize jobs lost to robots, but the real challenge for workers will be finding new roles that draw on their distinctively human skills. Another concern is ensuring that AI does not become so adept at the task it was designed for that it crosses moral or legal boundaries. Even if an AI’s original purpose is to serve humans, society would suffer if it pursued that goal in a harmful way. None of these problems will be easy to solve.

Greed and Power: The Driving Forces

Whenever a new technology emerges, organizations and their leaders immediately consider how they might profit from it. Unfortunately, the desire to use technology for the greater good frequently loses out to this unquenchable avarice. The same is true of generative AI, which is open to abuse and currently sits at the nexus of innovation and morality.

AI also poses significant regulatory problems because of how its funding, research, and development are organized. AI development is driven primarily by the commercial sector, and governments rely heavily on large tech firms to build their AI software, supply their AI experts, and make significant advances in the field. In many ways this reflects the environment we live in: large IT companies are the ones with the necessary resources and expertise.

Without government regulation, however, the great potential of AI will effectively be outsourced to commercial interests. That outcome offers little motivation to use AI to tackle the world’s most pressing problems, such as hunger, poverty, and climate change.

Disregarding Humanity’s Needs

The primary issue is not AI technology per se but how the top companies that heavily influence its development approach data and its use. Consider the application of big data and machine learning to marketing and product development. In theory these techniques could benefit customers, for example by improving product quality and enabling customization, but in practice they can harm consumer welfare in several ways. To begin with, businesses that gather more data about their customers can use it to differentiate prices, charging each customer closer to the maximum they are willing to pay and thereby capturing rents that would otherwise have gone to those customers. The collection of consumer data can also weaken price competition in an oligopolistic market: if a company with superior knowledge uses price discrimination to make its core customers less attractive to rivals, those rivals may respond by raising their own prices. That price pressure, of course, harms consumer welfare still further.
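
To make the price-discrimination point concrete, here is a minimal sketch in Python. All of the numbers (the customers’ willingness to pay and the unit cost) are hypothetical illustrations, not figures drawn from the sources above. It simply compares a single uniform price with fully personalized prices and shows how profit rises while consumer surplus shrinks.

    # Hypothetical example: personalized pricing vs. a uniform price.
    willingness_to_pay = [10, 8, 5, 3]  # each customer's maximum willingness to pay
    unit_cost = 1

    def profit_at_uniform_price(price):
        """Profit if every customer faces the same price."""
        buyers = [w for w in willingness_to_pay if w >= price]
        return (price - unit_cost) * len(buyers)

    # Uniform pricing: the firm picks the single profit-maximizing price.
    best_price = max(sorted(set(willingness_to_pay)), key=profit_at_uniform_price)
    uniform_profit = profit_at_uniform_price(best_price)
    uniform_surplus = sum(w - best_price for w in willingness_to_pay if w >= best_price)

    # Personalized pricing: with perfect knowledge of each customer, the firm
    # charges each one their own maximum, leaving no consumer surplus.
    personalized_profit = sum(w - unit_cost for w in willingness_to_pay)

    print(f"Uniform price {best_price}: profit={uniform_profit}, consumer surplus={uniform_surplus}")
    print(f"Personalized pricing: profit={personalized_profit}, consumer surplus=0")

With these made-up numbers, the firm’s profit rises from 14 to 22 while consumer surplus falls from 2 to 0, which is the rent transfer described above.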

The consequences of AI-based technologies for the labor market could be considerably more negative. Numerous studies indicate that the rapid adoption and deployment of automation technologies, which displace low- and middle-skill workers from the tasks they once performed, have contributed to rising labor-market inequality in the US and many other advanced economies.

More significantly, automation shifts the balance of power from labor to capital, which can have a significant impact on how democratic institutions function. Put another way, democratic politics depends on diverse forms of labor and capital acting as counterweights to one another; by eliminating the need for labor in the production process, automation may erode that balance and thereby harm democracy.

Politicians and AI Developers

Currently, politicians are putting their money where the hype is: AI. The technology has the capacity to upset the distribution of power among countries. It can be used to develop new weapons, automate industries, and expand surveillance, giving certain nations an edge over others. It can also be used to influence elections and manipulate public opinion, which could undermine political systems and spark violence. These are the hidden agendas of some myopic politicians. As AI grows more potent, people need to watch for authoritarian regimes using it to consolidate power and target particular demographics.

The Farce of Senate Hearings

Recent congressional hearings with representatives of the tech sector have been best characterized as hostile. On Capitol Hill, politicians angry at these corporations have publicly criticized Mark Zuckerberg, Jeff Bezos, and other digital titans. By contrast, when Sam Altman, the CEO of the San Francisco start-up OpenAI, appeared before a Senate panel, he broadly agreed with senators that the increasingly powerful AI technology being developed inside his company and others such as Google and Microsoft needs to be regulated.

However, it is uncertain how legislators will respond to the push for AI regulation. Congress has a poor track record of passing technology regulations, and the United States has lagged behind on laws governing speech, privacy, and child protection. AI rules are similarly behind: governments are playing catch-up as AI applications are created and released. Despite the international nature of the technology, there is still no coherent regulatory framework governing AI or the use of data.

Governments must implement adequate regulation to serve as “guardrails” for the growth of the private sector. This is not yet in place, however, either in the US (where most development is occurring) or in most other regions of the world. The vacuum created by this lack of regulation has important ethical and security ramifications for AI.

Some governments worry that implementing strict restrictions will stifle innovation and investment in their nations, costing them a competitive edge. This mindset runs the risk of a “race to the bottom,” as nations try to reduce regulation in order to entice major investments in technology.

According to Senator Richard Blumenthal, the Connecticut Democrat who chairs the panel, the session was the first in a series intended to learn more about the potential advantages and disadvantages of artificial intelligence in order to eventually “write the rules” for it. He also acknowledged that Congress has often lagged in adopting regulation to keep pace with new technologies.

The Shortcomings of AI Developers

Although using AI in software development could be revolutionary, there are moral and practical issues that developers tend to overlook. One cognitive psychologist has warned that chatbots may soon surpass the amount of knowledge a human brain can store. Events like the Hiroshima and Nagasaki bombings are evidence of the downside of human inventions. Terrorist organizations and rogue states could access AI technology without the need for costly, specialized equipment. Elon Musk and the late Stephen Hawking are among the prominent figures in science and technology who have expressed concern about the dangers posed by the rapid advancement of artificial intelligence.

Now that the community is demanding legislative action, policymakers should start considering limits on AI development in order to avoid repeating history. As with earlier technologies, well-designed regulation can reduce costly externalities, whereas poorly thought-out regulation can impede development. To adopt protocols that align AI with human values without unnecessarily burdening developers, policymakers must work closely with researchers.

Guidelines to address the technology’s potential risks are already being discussed in the newly emerging discipline of AI safety. Major scientific conferences have featured sessions on AI safety and ethics, and numerous books and articles have been published on the subject. By understanding researchers’ concerns, regulators can manage AI threats so that the technology’s advantages far outweigh its risks.

Final Thoughts

The history of AI is rife with ethical transgressions, including bias, privacy violations, and unchallengeable AI decision-making. Identifying and reducing ethical concerns is crucial both during the design and development of AI and after it is put to use. Equally crucial is that governments employ AI in a respectful, ethical manner that complies with their obligations under human rights law. In addition, everyone needs to be aware of the benefits of secure, tightly controlled AI systems. The best way to foster trust in AI is through transparent legislation informed by a vocal, engaged public.

References

“AI and Society.” American Academy of Arts and Sciences (amacad.org)

“What Is the Impact of Artificial Intelligence (AI) on Society?” Bernard Marr

“Dangers of Unregulated Artificial Intelligence.” CEPR

“Sam Altman, ChatGPT Creator and OpenAI CEO, Urges Senate for AI Regulation.” The New York Times (nytimes.com)

“Do the Benefits of Artificial Intelligence Outweigh the Risks?” The Economist (economist.com)
