Google declares a “Code Red” over ChatGPT and calls on its teams to build rivals
OpenAI’s new ChatGPT has compelled Google management to issue a “code red,” even though the bot is still in a research-preview phase. A prevalent worry in Silicon Valley is that the industry is on the verge of a technology shift that could transform it completely. Although ChatGPT has been available to the public for only three weeks, and sometimes gives out wrong or harmful information, The New York Times reports that the bot has forced Google to work on a competitor to counter the “first significant threat to its main search business.”
In reaction to the perceived threat posed by ChatGPT, according to The New York Times, Google CEO Sundar Pichai has organized a series of meetings to redefine the company’s artificial intelligence strategy, resulting in changes inside the organization. Employees have also been challenged to create alternatives to technology like DALL-E. It should be noted that Google already has a working chatbot that could compete with ChatGPT: LaMDA, or Language Model for Dialogue Applications. In fact, Google researchers created the Transformer architecture that forms the “heart” of ChatGPT. LaMDA attracted much attention when Google engineer Blake Lemoine claimed it was sentient, a claim that Google and outside experts widely dismissed.
Google’s situation is more complicated than it might first appear: even as chatbot technology advances, Google must weigh the potential effect on its search ad revenue, because users may be less likely to click on advertising links if a chatbot answers their questions succinctly. Amr Awadallah, a former Yahoo and Google employee who now leads Vectara, a startup building comparable technology, asserted that Google has a problem with its business model: if a chatbot gives the perfect answer to every question, users won’t click on any ads.
According to The New York Times, many analysts think Google will adopt a more “incremental” strategy rather than a complete “overhaul.” Furthermore, Google has been hesitant to release its technologies widely, LaMDA included, because of the risk of producing unreliable, harmful, or biased information. LaMDA is currently accessible only to a small group of users through the experimental AI Test Kitchen app.
Google is not the only member of the trillion-dollar club to have run into this issue. Microsoft released its Tay chatbot in 2016, but it was swiftly pulled offline because of its offensive language, and Meta more recently took down a chatbot of its own for comparable reasons.