Samsung Galaxy S24 Series to Launch With AI Capabilities, Might Outperform Pixel 8 Series: Report

Google’s Pixel 8 series debuted in the first week of October with several generative AI features. Now, early rumours suggest that Samsung aims to rival the new Pixel phones by bringing more AI features to the upcoming Galaxy S24 series. Samsung could pack generative AI tools resembling ChatGPT and Google Bard into the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra smartphones. With this development, users would be able to generate text from prompts and use text-to-image generative AI on their Galaxy S24 devices. The Samsung Galaxy S24 series could go official early next year.

A report by SamMobile citing industry sources claims that Samsung wants to make the upcoming Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra “the smartest AI phones ever”, even surpassing Google’s latest Pixel 8 lineup. The South Korean electronics giant will reportedly build features resembling AI tools such as ChatGPT and Google Bard into the flagship phones. This would allow Galaxy S24 users to create content and stories based on a few keywords.

The Galaxy S24 and Galaxy S24+ are tipped to feature Samsung’s Exynos 2400 chipset in some regions, while the Galaxy S24 Ultra could run on the Snapdragon 8 Gen 3 SoC across all markets. The company’s in-house processor could offer text-to-image generative AI features, and some of these functionalities are likely to be available both online and offline. Speech-to-text functionality is also expected to see improvements.

Samsung’s virtual assistant, Bixby, is expected to hold smarter, more humanlike conversations with users. Bixby could gain more AI capabilities, preparing it to rival Google Assistant and Bard. The new Exynos and Snapdragon chips in the Galaxy S24 series are expected to outperform the Pixel 8’s Tensor G3 chipset in AI tasks.

Samsung is speculated to announce the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra smartphones in January next year. The phones could come with titanium frames instead of aluminium ones. The Galaxy S24 Ultra is said to use new battery technology borrowed from electric vehicles (EVs) to offer improved battery life.


Samsung’s Galaxy S23 series of smartphones was launched earlier this week and the South Korean firm’s high-end handsets have seen a few upgrades across all three models. What about the increase in pricing? We discuss this and more on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Microsoft CEO Says Google Locking Up Content Needed to Train AI While Tech Giants Compete Hard

Microsoft chief executive Satya Nadella said Monday tech giants were competing for vast troves of content needed to train artificial intelligence, and complained Google was locking up content with expensive and exclusive deals with publishers.

Testifying in a landmark US trial against its rival Google, the first major antitrust case brought by the US since it sued Microsoft in 1998, Nadella testified the tech giants’ efforts to build content libraries to train their large language models “reminds me of the early phases of distribution deals.”

Distribution agreements are at the core of the US Justice Department’s antitrust fight against Google. The government says that Google, with some 90 percent of the search market, illegally pays $10 billion (nearly Rs. 83,200 crore) annually to smartphone makers like Apple and wireless carriers like AT&T and others to be the default search engine on their devices. 

The clout in search makes Google a heavy hitter in the lucrative advertising market, boosting its profits.

Nadella said building artificial intelligence took computing power, or servers, and data to train the software. On servers, he said, “No problem, we are happy to put in the dollars.”

But without naming Google, he said it was “problematic” if other companies locked up exclusive deals with big content makers. 

“When I am meeting with publishers now, they say Google’s going to write this check and it’s exclusive and you have to match it,” he said.

Rebuffed by Apple

Nadella also testified that Microsoft had sought to make its Bing search engine the default on Apple smartphones but was rebuffed. 

John Schmidtlein, Google’s lead lawyer, pressed Nadella on occasions when Microsoft did win default status on computers and mobile phones but users still bypassed Bing and continued to use Google by a wide margin.

Schmidtlein argued that Microsoft had made a series of strategic errors that led to Bing’s inability to grab a foothold, including a failure to invest in servers or engineers to improve Bing and a failure to see the mobile revolution.

Schmidtlein also said Microsoft’s success in becoming the default — on some Verizon phones in 2008, and BlackBerry and Nokia in 2011 — ended with the same result: users bypassed Bing and did the vast majority of their searches on Google.

On laptops, most of which use Microsoft operating systems, Bing is the default search engine and has a market share below 20 percent, Nadella acknowledged.

“You get up in the morning and you brush your teeth and you search on Google,” he added in a reference to Google’s dominance in search.

Question of quality

Judge Amit Mehta, who will decide the case being tried in the US District Court for the District of Columbia, asked Nadella why Apple would switch to Bing given the Microsoft product’s lower quality.

The question suggests Google’s argument — that it is dominant because of its quality and not because of illegal activity — has caught the judge’s interest.

Nadella became CEO of Microsoft in 2014, long after the tech giant faced its own federal antitrust lawsuit. That court fight, which ended in a 2001 settlement, forced Microsoft to end some business practices and opened the door to companies like Google.

As Google, which was founded in 1998, became the industry-leading search engine, the two became bitter rivals. Both have browsers, search engines, email services and a host of other overlaps. They became rivals in artificial intelligence more recently, with Microsoft investing heavily in OpenAI and Google building the Bard AI chatbot among other investments.

© Thomson Reuters 2023



Google Bard Gets New Features to Catch Up With Rival ChatGPT’s Popularity

Alphabet’s Google said on Tuesday that Bard, its generative artificial intelligence, will have the ability to fact-check its answers and analyze users’ personal Google data as the tech giant scrambles to catch up to ChatGPT in popularity.

The release last year of ChatGPT, a chatbot from Microsoft-backed OpenAI, sparked a race in the tech industry to give consumers access to generative AI technology. At the time, ChatGPT was the fastest-growing consumer application ever and is now one of the top 30 websites in the world.

Bard has not taken off in the same way. In August, it received 183 million visits, 13 percent of what ChatGPT received, according to website analytics firm Similarweb. 

As it seeks to gain ground in the fast-moving AI space, Google is rolling out Bard Extensions, enabling users to import their data from other Google products. For instance, users could ask Bard to search their files in Google Drive or provide a summary of the user’s Gmail inbox. For now, Bard users will only be able to pull information in from Google apps, but Google is working with external companies to connect their applications into Bard in the future, Google senior product director Jack Krawczyk said.

Another new feature in Bard seeks to alleviate a nagging problem for generative AI: inaccurate responses known as “hallucinations”. Bard users will be able to see which parts of Bard’s answers differ from and agree with Google search results. 

“We are presenting (Bard) in a way that it admits when it’s not confident,” Krawczyk said, explaining that the intention is to build users’ trust in generative AI through holding Bard accountable.

A third new feature allows users to invite others into Bard conversations.

© Thomson Reuters 2023  



Google Rolls Out Its AI Chatbot, Bard, in Europe and Brazil to Take on Microsoft-Backed ChatGPT

Alphabet said it is rolling out its artificial-intelligence chatbot, Bard, in Europe and Brazil on Thursday, the product’s biggest expansion since its February launch, pitting it against Microsoft-backed rival ChatGPT.

Bard and ChatGPT are human-sounding programs that use generative artificial intelligence to hold conversations with users and answer myriad prompts. The products have touched off global excitement tempered with caution.

Companies have jumped onto the AI bandwagon, investing billions with the hope of generating much more in advertising and cloud revenue. Earlier this week, billionaire Elon Musk also launched his long-teased artificial-intelligence startup xAI, whose team includes several former engineers at Google, Microsoft and OpenAI.

Google has also now added new features to Bard, which apply worldwide.

“Starting today, you can collaborate with Bard in over 40 languages, including Arabic, Chinese, German, Hindi and Spanish,” Google senior product director Jack Krawczyk said in a blog post.

“Sometimes hearing something out loud can help you approach your idea in a different way … This is especially helpful if you want to hear the correct pronunciation of a word or listen to a poem or script.”

He said users can now change the tone and style of Bard’s responses to simple, long, short, professional or casual. They can pin or rename conversations, export code to more places and use images in prompts.

Bard’s launch in the EU had been held up by local privacy regulators. Krawczyk said Google had since met with the watchdogs to reassure them on issues relating to transparency, choice and control.

In a briefing with journalists, Amar Subramanya, engineering vice president of Bard, added that users could opt out of their data being collected.

Google has been hit by a fresh class action in the US over the alleged misuse of users’ personal information to train its artificial intelligence system.

Subramanya declined to comment on whether there were plans to develop a Bard app.

“Bard is an experiment,” he added. “We want to be bold and responsible.”

Nonetheless, novelty appeal may be waning, with recent Web user numbers showing that monthly traffic to ChatGPT’s website and unique visitors declined for the first time ever in June.

© Thomson Reuters 2023



Google Pixel Devices May Get Bard AI as a New Homescreen Widget, Tips APK Teardown

Bard, Google’s ChatGPT-esque chatbot, could soon find its way to Pixel smartphones and tablets. Google is reportedly preparing to expand the reach of its generative artificial intelligence (AI) chatbot by adding it to the company’s devices through a widget. A recent APK (Android Package Kit) teardown of Bard’s code indicates the arrival of the chatbot as a homescreen widget. Bard could either be integrated into the Google Search app or arrive as a standalone app.

As per an APK teardown by 9to5Google, Google is working to implement Bard AI on Pixel phones and tablets. According to the report, the search giant is planning to add a dedicated homescreen widget for Bard, exclusive to the company’s devices. The widget is likely to include suggested prompts for conversations, opening directly into the corresponding app.

Bard’s potential integration into Pixel phones could set the groundwork for a broader rollout of AI to all Android devices in the future. Since this has not yet been confirmed by Google, the information should be taken with a pinch of salt.

Google started the public release of Bard in March this year to gain ground on Microsoft in a fast-moving race on AI technology. Currently, Google’s in-house competitor to OpenAI’s ChatGPT can be accessed by a small set of users. Bard can code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users.

Google is expected to make AI-related announcements at this year’s I/O conference. The annual event, set to begin on May 10, will see the launch of the Pixel 7a and the much-anticipated Pixel Fold.



Alphabet CEO Sundar Pichai Reaps $226 Million Compensation in 2022 Amid Layoffs

Alphabet Chief Executive Sundar Pichai received total compensation of about $226 million (roughly Rs. 1,850 crore) in 2022, more than 800 times the median employee’s pay, the company said in a securities filing on Friday.

Pichai’s compensation included stock awards of about $218 million (roughly Rs. 1,800 crore), the filing showed.

The pay disparity comes at a time when Alphabet, the parent company of Google, has been cutting jobs globally. The Mountain View, California-based company announced plans in January to cut 12,000 jobs around the world, equivalent to 6 percent of its global workforce.

Early this month, hundreds of Google employees staged a walkout at the company’s London offices following a dispute over layoffs.

In March, Google employees staged a walkout at the company’s Zurich offices after more than 200 workers were laid off.

Meanwhile, the company is working rapidly to make its chatbot Bard stand out among competitors. On Friday, Google announced that Bard, its generative artificial intelligence (AI) chatbot, will help people write code to develop software, as the tech giant plays catch-up in a fast-moving race on AI technology.

Bard will be able to code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users, Google said on Friday.

The company said Bard can also optimise code to make it faster or more efficient with simple prompts such as “Could you make that code faster?”.

Currently, Bard can be accessed by a small set of users who can chat with the bot and ask questions instead of running Google’s traditional search tool.

© Thomson Reuters 2023



Google’s Rush to Take Its AI Chatbot Bard Public Led to Ethical Lapses, Employees Say

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. But the November 2022 debut of rival OpenAI’s popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.

That was a markedly faster pace of development for the technology, and one that could have profound societal impact. The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said. The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development, they said.

Google is aiming to revitalize its maturing search business around the cutting-edge technology, which could put generative AI into millions of phones and homes around the world — ideally before OpenAI, with the backing of Microsoft, beats the company to it.

“AI ethics has taken a back seat,” said Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google manager. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

In response to questions from Bloomberg, Google said responsible AI remains a top priority at the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Brian Gabriel, a spokesperson. The team working on responsible AI shed at least three members in a January round of layoffs at the company, including the head of governance and programs. The cuts affected about 12,000 workers at Google and its parent company.

Google, which over the years spearheaded much of the research underpinning today’s AI advancements, had not yet integrated a consumer-friendly version of generative AI into its products by the time ChatGPT launched. The company was cautious of its power and the ethical considerations that would go hand-in-hand with embedding the technology into search and other marquee products, the employees said.

By December, senior leadership decreed a competitive “code red” and changed its appetite for risk. Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said. Still, it needed to get its ethics teams on board. That month, the AI governance lead, Jen Gennai, convened a meeting of the responsible innovation group, which is charged with upholding the company’s AI principles.

Gennai suggested that some compromises might be necessary in order to pick up the pace of product releases. The company assigns scores to its products in several important categories, meant to measure their readiness for release to the public. In some, like child safety, engineers still need to clear the 100 percent threshold. But Google may not have time to wait for perfection in other areas, she advised in the meeting. “‘Fairness’ may not be, we have to get to 99 percent,” Gennai said, referring to its term for reducing bias in products. “On ‘fairness,’ we might be at 80, 85 percent, or something” to be enough for a product launch, she added.

In February, one employee raised issues in an internal message group: “Bard is worse than useless: please do not launch.” The note was viewed by nearly 7,000 people, many of whom agreed that the AI tool’s answers were contradictory or even egregiously wrong on simple factual queries.

The next month, Gennai overruled a risk evaluation submitted by members of her team stating Bard was not ready because it could cause harm, according to people familiar with the matter. Shortly after, Bard was opened up to the public — with the company calling it an “experiment”.

In a statement, Gennai said it wasn’t solely her decision. After the team’s evaluation she said she “added to the list of potential risks from the reviewers and escalated the resulting analysis” to a group of senior leaders in product, research and business. That group then “determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” she said.

Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety. Researchers building AI outnumber those focused on safety by a 30-to-1 ratio, the Center for Humane Technology said at a recent presentation, underscoring the often lonely experience of voicing concerns in a large organization.

As progress in artificial intelligence accelerates, new concerns about its societal effects have emerged. Large language models, the technologies that underpin ChatGPT and Bard, ingest enormous volumes of digital text from news articles, social media posts and other internet sources, and then use that written material to train software that predicts and generates content on its own when given a prompt or query. That means that by their very nature, the products risk regurgitating offensive, harmful or inaccurate speech.

But ChatGPT’s remarkable debut meant that by early this year, there was no turning back. In February, Google began a blitz of generative AI product announcements, touting chatbot Bard, and then the company’s video service YouTube, which said creators would soon be able to virtually swap outfits in videos or create “fantastical film settings” using generative AI. Two weeks later, Google announced new AI features for Google Cloud, showing how users of Docs and Slides will be able to, for instance, create presentations and sales-training documents, or draft emails. On the same day, the company announced that it would be weaving generative AI into its health-care offerings. Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.

The challenge of developing cutting-edge artificial intelligence in an ethical manner has long spurred internal debate. The company has faced high-profile blunders over the past few years, including an embarrassing incident in 2015 when its Photos service mistakenly labeled images of a Black software developer and his friend as “gorillas.”

Three years later, the company said it did not fix the underlying AI technology, but instead erased all results for the search terms “gorilla,” “chimp,” and “monkey,” a solution that it says “a diverse group of experts” weighed in on. The company also built up an ethical AI unit tasked with carrying out proactive work to make AI fairer for its users.

But a significant turning point, according to more than a dozen current and former employees, was the ousting of AI researchers Timnit Gebru and Margaret Mitchell, who co-led Google’s ethical AI team until they were pushed out in December 2020 and February 2021 over a dispute regarding fairness in the company’s AI research. Samy Bengio, a computer scientist who oversaw Gebru and Mitchell’s work, and several other researchers would end up leaving for competitors in the intervening years.

After the scandal, Google tried to improve its public reputation. The responsible AI team was reorganized under Marian Croak, then a vice president of engineering. She pledged to double the size of the AI ethics team and strengthen the group’s ties with the rest of the company.

Even after the public pronouncements, some found it difficult to work on ethical AI at Google. One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review. Managers protested that it was getting in the way of their “real work,” the person said.

Those who remained working on ethical AI at Google were left questioning how to do the work without putting their own jobs at risk. “It was a scary time,” said Nyalleng Moorosi, a former researcher at the company who is now a senior researcher at the Distributed AI Research Institute, founded by Gebru. Doing ethical AI work means “you were literally hired to say, I don’t think this is population-ready,” she added. “And so you are slowing down the process.”

To this day, AI ethics reviews of products and features, two employees said, are almost entirely voluntary at the company, with the exception of research papers and the review process conducted by Google Cloud on customer deals and products for release. AI research in delicate areas like biometrics, identity features, or children is given a mandatory “sensitive topics” review by Gennai’s team, but other projects do not necessarily receive ethics reviews, though some employees reach out to the ethical AI team even when not required.

Still, when employees on Google’s product and engineering teams look for a reason the company has been slow to market on AI, the public commitment to ethics tends to come up. Some in the company believed new tech should be in the hands of the public as soon as possible, in order to make it better faster with feedback.

Before the code red, it could be hard for Google engineers to get their hands on the company’s most advanced AI models at all, another former employee said. Engineers would often start brainstorming by playing around with other companies’ generative AI models to explore the possibilities of the technology before figuring out a way to make it happen within the bureaucracy, the former employee said.

“I definitely see some positive changes coming out of ‘code red’ and OpenAI pushing Google’s buttons,” said Gaurav Nemade, a former Google product manager who worked on its chatbot efforts until 2020. “Can they actually be the leaders and challenge OpenAI at their own game?” Recent developments — like Samsung reportedly considering replacing Google with Microsoft’s Bing, whose tech is powered by ChatGPT, as the search engine on its devices — have underscored the first-mover advantage in the market right now.

Some at the company said they believe that Google has conducted sufficient safety checks with its new generative AI products, and that Bard is safer than competing chatbots. But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.

Teams working on the new AI features have been siloed, making it hard for rank-and-file Googlers to see the full picture of what the company is working on. Company mailing lists and internal channels that were once places where employees could openly voice their doubts have been curtailed with community guidelines under the pretext of reducing toxicity; several employees said they viewed the restrictions as a way of policing speech.

“There is a great amount of frustration, a great amount of this sense of like, what are we even doing?” Mitchell said. “Even if there aren’t firm directives at Google to stop doing ethical work, the atmosphere is one where people who are doing the kind of work feel really unsupported and ultimately will probably do less good work because of it.”

When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings — a stance that has been critiqued by some in the field as a form of marketing — rather than the day-to-day scenarios that already have the potential to be harmful.

El-Mahdi El-Mhamdi, a former research scientist at Google, said he left the company in February over its refusal to engage with ethical AI issues head-on. Late last year, he said, he co-authored a paper that showed it was mathematically impossible for foundational AI models to be large, robust and remain privacy-preserving.

He said the company raised questions about his participation in the research while using his corporate affiliation. Rather than go through the process of defending his work, he said he volunteered to drop the affiliation with Google and use his academic credentials instead.

“If you want to stay on at Google, you have to serve the system and not contradict it,” El-Mhamdi said.

© 2023 Bloomberg LP



Google’s Plan to Catch ChatGPT Is to Stuff AI Into Everything

Artificial intelligence was supposed to be Google’s thing. The company has cultivated a reputation for making long-term bets on all kinds of far-off technologies, and much of the research underpinning the current wave of AI-powered chatbots took place in its labs. Yet a startup called OpenAI has emerged as an early leader in so-called generative AI—software that can produce its own text, images or videos—by launching ChatGPT in November. Its sudden success has left Google parent company Alphabet sprinting to catch up in a key subfield of the technology that Chief Executive Officer Sundar Pichai has said will be “more profound than fire or electricity.”

ChatGPT, which some see as an eventual challenger to Google’s traditional search engine, seems doubly threatening given OpenAI’s close ties to Microsoft. The feeling that Google may be falling behind in an area that it has considered a key strength has led to no small measure of anxiety in Mountain View, California, according to current and former employees as well as others close to the company, many of whom asked to remain anonymous because they weren’t allowed to speak publicly. As one current employee puts it: “There is an unhealthy combination of abnormally high expectations and great insecurity about any AI-related initiative.”

The effort has Pichai reliving his days as a product manager, as he’s taken to weighing in directly on the details of product features, a task that would usually fall far below his pay grade, according to one former employee. Google co-founders Larry Page and Sergey Brin have also gotten more involved in the company than they’ve been in years, with Brin even submitting code changes to Bard, Google’s ChatGPT-esque chatbot. Senior management has declared a “code red” that comes with a directive that all of its most important products—those with more than a billion users—must incorporate generative AI within months, according to a person with knowledge of the matter. In an early example, the company announced in March that creators on its YouTube video platform would soon be able to use the technology to virtually swap outfits.

Some Google alumni have been reminded of the last time the company implemented an internal mandate to infuse every key product with a new idea: the effort beginning in 2011 to promote the ill-fated social network Google+. It’s not a perfect comparison—Google was never seen as a leader in social networking, while its expertise in AI is undisputed. Still, there’s a similar feeling. Employee bonuses were once hitched to Google+’s success. Current and former employees say at least some Googlers’ ratings and reviews will likely be influenced by their ability to integrate generative AI into their work. The code red has already resulted in dozens of planned generative AI integrations. “We’re throwing spaghetti at the wall,” says one Google employee. “But it’s not even close to what’s needed to transform the company and be competitive.”

In the end, the mobilization around Google+ failed. The social network struggled to find traction with users, and Google ultimately said in 2018 that it would shutter the product for consumers. One former Google executive sees the flop as a cautionary tale. “The mandate from Larry was that every product has to have a social component,” this person says. “It ended quite poorly.”

A Google spokesperson pushes back against the comparison between the code red and the Google+ campaign. While the Google+ mandate touched all products, the current AI push has largely consisted of Googlers being encouraged to test out the company’s AI tools internally, the spokesperson says: a common practice in tech nicknamed “dogfooding.” Most Googlers haven’t been pivoting to spend extra time on AI, only those working on relevant projects, the spokesperson says.

Google is not alone in its conviction that AI is now everything. Silicon Valley has entered a full-on hype cycle, with venture capitalists and entrepreneurs suddenly proclaiming themselves AI visionaries, pivoting away from recent fixations such as the blockchain, and companies seeing their stock prices soar after announcing AI integrations. In recent weeks, Meta Platforms CEO Mark Zuckerberg has been focused on AI rather than the metaverse—a technology he recently declared so foundational to the company that it required changing its name, according to two people familiar with the matter.

The new marching orders are welcome news for some people at Google, who are well aware of its history of diving into speculative research only to stumble when it comes to commercializing it. Members of some teams already working on generative AI projects are hopeful that they’ll now be able to “ship more and have more product sway, as opposed to just being some research thing,” according to one of the people with knowledge of the matter.

In the long run, it may not matter much that OpenAI sucked all the air out of the public conversation for a few months, given how much work Google has already done. Pichai began referring to Google as an “AI-first” company in 2016. It’s used machine learning to drive its ad business for years while also weaving AI into key consumer products such as Gmail and Google Photos, where it uses the technology to help users compose emails and organize images. In a recent analysis, research company Zeta Alpha examined the top 100 most cited AI research papers from 2020 to 2022 and found that Google dominated the field. “The way it has ended up appearing is that Google was kind of the sleeping giant who is behind and playing catch-up now. I think the reality is actually not quite that,” says Amin Ahmad, a former AI researcher at Google who co-founded Vectara, a startup that offers conversational search tools to businesses. “Google was actually very good, I think, at applying this technology into some of their core products years and years ahead of the rest of the industry.”

Google has also wrestled with the tension between its commercial priorities and the need to handle emerging technology responsibly. There’s a well-documented tendency of automated tools to reflect biases that exist in the data sets they’ve been trained on, as well as concerns about the implications of testing tools on the public before they’re ready. Generative AI in particular comes with risks that have kept Google from rushing to market. In search, for instance, a chatbot could deliver a single answer that seems to come straight from the company that made it, similar to the way ChatGPT appears to be the voice of OpenAI. This is a fundamentally riskier proposition than providing a list of links to other websites.

Google’s code red seems to have scrambled its risk-reward calculations in ways that concern some experts in the field. Emily Bender, a professor of computational linguistics at the University of Washington, says Google and other companies hopping onto the generative AI trend may not be able to steer their AI products away “from the most egregious examples of bias, let alone the pervasive but slightly subtler cases.” The spokesperson says Google’s efforts are governed by its AI principles, a set of guidelines announced in 2018 for developing the technology responsibly, adding that the company is still taking a cautious approach.

Other outfits have already shown they’re willing to push ahead, whether Google does or not. One of the most important contributions Google’s researchers have made to the field was a landmark paper titled “Attention Is All You Need,” in which the authors introduced transformers: systems that help AI models zero in on the most important pieces of information in the data they’re analyzing. Transformers are now key building blocks for large language models, the tech powering the current crop of chatbots—the “T” in ChatGPT stands for “transformer.” Five years after the paper’s publication, all but one of the authors have left Google, with some citing a desire to break free of the strictures of a large, slow-moving company.

They are among dozens of AI researchers who’ve jumped to OpenAI as well as a host of smaller startups, including Character.AI, Anthropic and Adept. A handful of startups founded by Google alumni—including Neeva, Perplexity AI, Tonita and Vectara—are seeking to reimagine search using large language models. The fact that only a few key places have the knowledge and ability to build them makes the competition for that talent “much more intense than in other fields where the ways of training models are not as specialized,” says Sara Hooker, a Google Brain alumna now working at AI startup Cohere.

It’s not unheard of for people or organizations to contribute significantly to the development of one breakthrough technology or another, only to see someone else realize stupefying financial gains without them. Keval Desai, a former Googler who’s now managing director of venture capital firm Shakti, cites the example of Xerox PARC, the research lab that laid the groundwork for much of the personal computing era, only to see Apple Inc. and Microsoft come along and build their trillion-dollar empires on its back. “Google wants to make sure that it’s not the Xerox PARC of its era,” says Desai. “All the innovation happened there, but none of the execution.”

© 2023 Bloomberg LP


