The AI arms race highlights the urgent need for responsible innovation

The recent uproar over language processing tools like ChatGPT has prompted many organizations to rush out guidelines for the responsible use of their products. The online publishing platform Medium, for example, has issued a statement on AI-generated writing that encourages “transparency” and “disclosure.”

My own organization has created a page of frequently asked questions (FAQs) about generative AI. There, it encourages educators to make “wise and ethical use” of AI and chatbots.

In light of this week’s release of the more powerful GPT-4, which risks becoming a misinformation and propaganda machine, these ethical measures already seem quaint. OpenAI claims that GPT-4 passed a simulated bar exam with a score in the top 10% of test takers, whereas GPT-3.5 scored in the bottom 10%.

Uncontrolled innovation

ChatGPT’s power comes from a supercomputer and an advanced cloud computing platform, both built with financial backing from Microsoft. The Microsoft–OpenAI partnership will accelerate the global rollout of generative AI products by running them on Microsoft’s Azure cloud platform.

It may be a coincidence, but GPT-4 was released less than two months after Microsoft disbanded a team working on ethics and societal issues. Frustrated members of that team said the decision reflected pressure from Microsoft’s C-suite, which emphasized getting AI products “into the hands of customers at a very high speed.”

It seems the once-derided Silicon Valley motto “move fast and break things” may be coming back into fashion.

For now, Microsoft continues to operate its Office of Responsible AI. But as this high-stakes game of uncontrolled innovation rages on, it seems fair to ask what “responsible innovation” actually means.

Responsible innovation

When I asked ChatGPT to define responsible innovation, it responded: “The process of developing and implementing new technologies, processes, or products in a way that addresses ethical, social, and environmental concerns. It entails taking into consideration the possible effects and risks that innovation may have on a variety of stakeholders, such as customers, employees, communities, and the environment.”

ChatGPT’s definition is accurate enough, but it is missing context. Whose values are these, and how are we putting them into practice? In other words, who is responsible for responsible innovation?

Over the last decade, a range of organizations, including corporations, think tanks, and universities, have launched responsible innovation initiatives aimed at anticipating and mitigating the harmful effects of technological progress.

In 2018, Google established a responsible innovation team intended to draw on “experts in ethics, human rights, user research, and racial justice.” The group’s most significant contribution to Google has been its responsible AI principles. Beyond those principles, however, the company’s ethical record is open to question.

Concerns have been raised about Google’s ability to police itself, given its collaborations with the United States military and its treatment of two former employees who championed ethical principles.

In fact, the grass-roots efforts of Google’s own employees have been the company’s most significant contribution to responsible innovation. This suggests that responsible innovation may need to be developed from the ground up, a tall order at a time of widespread layoffs across the technology industry.

Ethics-washing

According to the Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct, people working in information technology have a responsibility to ensure their innovations benefit society. But what motivates tech workers to be “good” without support from their managers, guidance from ethics experts, or regulation from government? Can we trust the tech industry to audit itself?

Another problem with self-auditing is “ethics washing,” which occurs when businesses pay only lip service to ethics. Meta’s responsible innovation efforts are a case in point.

In June 2021, Meta’s top product design executive praised the responsible innovation team she had helped establish in 2018, along with Meta’s “commitment to making the most ethically responsible decisions possible, every day.” By September 2022, the team had been disbanded.

Today, “responsible innovation” survives mainly as a marketing tagline for the Meta store. Meta’s Responsible AI team was likewise dissolved in 2021, its members absorbed into the Social Impact group, which helps charitable organizations use Meta’s products.

This shift from responsible innovation to social impact is a form of ethics washing: a strategy that deflects attention from questionable practices by emphasizing charitable giving. That is why it is important to distinguish the responsible development of technology from the now-ubiquitous philanthropic public relations slogan “tech for good.”

Innovation that is both responsible and profitable

It should come as no surprise that the most sophisticated calls for responsible innovation have come from outside corporate culture.

A white paper published by the Information and Communications Technology Council (ICTC), a Canadian non-profit, outlines principles grounded in values such as self-awareness, fairness, and justice, concepts more familiar to philosophers and ethicists than to CEOs and startup founders.

The ICTC’s principles ask those who develop technology to go beyond mitigating negative consequences and to work actively to redress societal power imbalances.

One might ask how these principles apply to the latest advances in generative AI. When OpenAI asserts that it is “developing technologies that empower everyone,” who exactly counts as “everyone”? And under what conditions will this supposed “power” be exercised?

These questions echo the work of scholars such as Ruha Benjamin and Armond Towns, who are skeptical of the word “everyone” in such contexts and who ask who counts as the “human” in human-centered technology.

Taking such considerations seriously would bring the AI race to a halt, which might not be such a bad outcome after all.

Value conflicts

In tech, there is a persistent tension between economic value and ethical values. Responsible innovation initiatives were created to address that tension, but lately such efforts have been largely sidelined.

The reaction of conservative pundits in the United States to the recent collapse of Silicon Valley Bank makes this tension tangible. Several prominent Republicans, including Donald Trump, have wrongly blamed the bank’s “woke” outlook and its commitment to responsible investing and equity initiatives for its failure.

According to Home Depot co-founder Bernie Marcus, “these banks are badly run because everybody is focused on diversity and all of the woke issues” rather than on what Trump calls “common sense business practices.”

The future of responsible innovation may hinge on how far so-called “common sense business practices” can be shaped by “woke” concerns like ethics, society, and the environment. If ethical concerns can simply be dismissed as “woke,” the outlook for responsible innovation is about as bright as that of the CD-ROM.
