Alphabet CEO Sundar Pichai Reaps $226 Million Compensation in 2022 Amid Layoffs

Alphabet Chief Executive Sundar Pichai received total compensation of about $226 million (roughly Rs. 1,850 crore) in 2022, more than 800 times the median employee’s pay, the company said in a securities filing on Friday.

Pichai’s compensation included stock awards of about $218 million (roughly Rs. 1,800 crore), the filing showed.

The pay disparity comes at a time when Alphabet, the parent company of Google, has been cutting jobs globally. The Mountain View, California-based company announced plans in January to cut 12,000 jobs around the world, equivalent to 6 percent of its global workforce.

Early this month, hundreds of Google employees staged a walkout at the company’s London offices following a dispute over layoffs.

In March, Google employees staged a walkout at the company’s Zurich offices after more than 200 workers were laid off.

Meanwhile, the company is working rapidly to make its chatbot Bard stand out among competitors. On Friday, Google announced it will update Bard, its generative artificial intelligence (AI) chatbot, to help people write code to develop software, as the tech giant plays catch-up in a fast-moving race on AI technology.

Bard will be able to code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users, Google said on Friday.

The company said Bard can also optimise code to make it faster or more efficient with simple prompts such as “Could you make that code faster?”.

Currently, Bard can be accessed by a small set of users who can chat with the bot and ask questions instead of running Google’s traditional search tool.

© Thomson Reuters 2023


Xiaomi launched its camera-focused flagship Xiaomi 13 Ultra smartphone, while Apple opened its first stores in India this week. We discuss these developments, as well as other reports on smartphone-related rumours and more on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Google Bard Now Helps Write Software Codes in 20 Programming Languages

Alphabet’s Google said on Friday it will update Bard, its generative artificial intelligence (AI) chatbot, to help people write code to develop software, as the tech giant plays catch-up in a fast-moving race on AI technology.

Last month, the company started the public release of Bard to gain ground on Microsoft.

The release of ChatGPT, a chatbot from the Microsoft-backed startup OpenAI, last year caused a sprint in the technology sector to put AI into more users’ hands.

Google describes Bard as an experiment allowing collaboration with generative AI, technology that relies on past data to create rather than identify content.

Bard will be able to code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users, Google said on Friday.

The company said Bard can also optimise code to make it faster or more efficient with simple prompts such as “Could you make that code faster?”.
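As a rough illustration of the kind of micro-optimisation such a prompt targets (this is a hypothetical sketch, not actual Bard output), a chatbot asked to "make that code faster" might rewrite an explicit Python accumulation loop to use a built-in, which pushes the iteration into C-level code:

```python
# Hypothetical before/after rewrite of the sort a "make this code faster"
# prompt might produce; illustrative only, not generated by Bard.

def total_squares_slow(values):
    # Original version: explicit loop accumulating a running total.
    total = 0
    for v in values:
        total += v * v
    return total

def total_squares_fast(values):
    # Suggested rewrite: same result via the built-in sum() over a
    # generator expression, which is typically faster in CPython.
    return sum(v * v for v in values)

if __name__ == "__main__":
    data = list(range(1000))
    # Both versions must agree before the rewrite is accepted.
    assert total_squares_slow(data) == total_squares_fast(data)
    print(total_squares_fast([1, 2, 3]))  # prints 14
```

The key check in any such rewrite, whether suggested by a human or a chatbot, is that the optimised version still produces identical results on representative inputs.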

Currently, Bard can be accessed by a small set of users who can chat with the bot and ask questions instead of running Google’s traditional search tool.

The company began the public release of its chatbot Bard in late March this year, seeking users and feedback to gain ground on Microsoft in a fast-moving race on artificial intelligence technology. Bard could show three different versions or “drafts” of any given answer among which users could toggle, and it displayed a button stating “Google it,” should a user desire web results for a query.

© Thomson Reuters 2023



Google’s Rush to Take Its AI Chatbot Bard Public Led to Ethical Lapses, Employees Say

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. But the November 2022 debut of rival OpenAI’s popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.

That was a markedly faster pace of development for the technology, and one that could have profound societal impact. The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said. The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development, they said.

Google is aiming to revitalize its maturing search business around the cutting-edge technology, which could put generative AI into millions of phones and homes around the world — ideally before OpenAI, with the backing of Microsoft, beats the company to it.

“AI ethics has taken a back seat,” said Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google manager. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

In response to questions from Bloomberg, Google said responsible AI remains a top priority at the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Brian Gabriel, a spokesperson. The team working on responsible AI shed at least three members in a January round of layoffs at the company, including the head of governance and programs. The cuts affected about 12,000 workers at Google and its parent company.

Google, which over the years spearheaded much of the research underpinning today’s AI advancements, had not yet integrated a consumer-friendly version of generative AI into its products by the time ChatGPT launched. The company was cautious of its power and the ethical considerations that would go hand-in-hand with embedding the technology into search and other marquee products, the employees said.

By December, senior leadership decreed a competitive “code red” and changed its appetite for risk. Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said. Still, it needed to get its ethics teams on board. That month, the AI governance lead, Jen Gennai, convened a meeting of the responsible innovation group, which is charged with upholding the company’s AI principles.

Gennai suggested that some compromises might be necessary in order to pick up the pace of product releases. The company assigns scores to its products in several important categories, meant to measure their readiness for release to the public. In some, like child safety, engineers still need to clear the 100 percent threshold. But Google may not have time to wait for perfection in other areas, she advised in the meeting. “‘Fairness’ may not be, we have to get to 99 percent,” Gennai said, referring to its term for reducing bias in products. “On ‘fairness,’ we might be at 80, 85 percent, or something” to be enough for a product launch, she added.

In February, one employee raised issues in an internal message group: “Bard is worse than useless: please do not launch.” The note was viewed by nearly 7,000 people, many of whom agreed that the AI tool’s answers were contradictory or even egregiously wrong on simple factual queries.

The next month, Gennai overruled a risk evaluation submitted by members of her team stating Bard was not ready because it could cause harm, according to people familiar with the matter. Shortly after, Bard was opened up to the public — with the company calling it an “experiment”.

In a statement, Gennai said it wasn’t solely her decision. After the team’s evaluation she said she “added to the list of potential risks from the reviewers and escalated the resulting analysis” to a group of senior leaders in product, research and business. That group then “determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” she said.

Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety. Researchers building AI outnumber those focused on safety by a 30-to-1 ratio, the Center for Humane Technology said at a recent presentation, underscoring the often lonely experience of voicing concerns in a large organization.

As progress in artificial intelligence accelerates, new concerns about its societal effects have emerged. Large language models, the technologies that underpin ChatGPT and Bard, ingest enormous volumes of digital text from news articles, social media posts and other internet sources, and then use that written material to train software that predicts and generates content on its own when given a prompt or query. That means that by their very nature, the products risk regurgitating offensive, harmful or inaccurate speech.

But ChatGPT’s remarkable debut meant that by early this year, there was no turning back. In February, Google began a blitz of generative AI product announcements, touting chatbot Bard, and then the company’s video service YouTube, which said creators would soon be able to virtually swap outfits in videos or create “fantastical film settings” using generative AI. Two weeks later, Google announced new AI features for Google Cloud, showing how users of Docs and Slides will be able to, for instance, create presentations and sales-training documents, or draft emails. On the same day, the company announced that it would be weaving generative AI into its health-care offerings. Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.

The challenge of developing cutting-edge artificial intelligence in an ethical manner has long spurred internal debate. The company has faced high-profile blunders over the past few years, including an embarrassing incident in 2015 when its Photos service mistakenly labeled images of a Black software developer and his friend as “gorillas.”

Three years later, the company said it did not fix the underlying AI technology, but instead erased all results for the search terms “gorilla,” “chimp,” and “monkey,” a solution that it says “a diverse group of experts” weighed in on. The company also built up an ethical AI unit tasked with carrying out proactive work to make AI fairer for its users.

But a significant turning point, according to more than a dozen current and former employees, was the ousting of AI researchers Timnit Gebru and Margaret Mitchell, who co-led Google’s ethical AI team until they were pushed out in December 2020 and February 2021 over a dispute regarding fairness in the company’s AI research. Samy Bengio, a computer scientist who oversaw Gebru and Mitchell’s work, and several other researchers would end up leaving for competitors in the intervening years.

After the scandal, Google tried to improve its public reputation. The responsible AI team was reorganized under Marian Croak, then a vice president of engineering. She pledged to double the size of the AI ethics team and strengthen the group’s ties with the rest of the company.

Even after the public pronouncements, some found it difficult to work on ethical AI at Google. One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review. Managers protested that it was getting in the way of their “real work,” the person said.

Those who remained working on ethical AI at Google were left questioning how to do the work without putting their own jobs at risk. “It was a scary time,” said Nyalleng Moorosi, a former researcher at the company who is now a senior researcher at the Distributed AI Research Institute, founded by Gebru. Doing ethical AI work means “you were literally hired to say, I don’t think this is population-ready,” she added. “And so you are slowing down the process.”

To this day, AI ethics reviews of products and features, two employees said, are almost entirely voluntary at the company, with the exception of research papers and the review process conducted by Google Cloud on customer deals and products for release. AI research in delicate areas like biometrics, identity features or children is given a mandatory “sensitive topics” review by Gennai’s team, but other projects do not necessarily receive ethics reviews, though some employees reach out to the ethical AI team even when not required.

Still, when employees on Google’s product and engineering teams look for a reason the company has been slow to market on AI, the public commitment to ethics tends to come up. Some in the company believed new tech should be in the hands of the public as soon as possible, in order to make it better faster with feedback.

Before the code red, it could be hard for Google engineers to get their hands on the company’s most advanced AI models at all, another former employee said. Engineers would often start brainstorming by playing around with other companies’ generative AI models to explore the possibilities of the technology before figuring out a way to make it happen within the bureaucracy, the former employee said.

“I definitely see some positive changes coming out of ‘code red’ and OpenAI pushing Google’s buttons,” said Gaurav Nemade, a former Google product manager who worked on its chatbot efforts until 2020. “Can they actually be the leaders and challenge OpenAI at their own game?” Recent developments — like Samsung reportedly considering replacing Google with Microsoft’s Bing, whose tech is powered by ChatGPT, as the search engine on its devices — have underscored the first-mover advantage in the market right now.

Some at the company said they believe that Google has conducted sufficient safety checks with its new generative AI products, and that Bard is safer than competing chatbots. But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.

Teams working on the new AI features have been siloed, making it hard for rank-and-file Googlers to see the full picture of what the company is working on. Company mailing lists and internal channels that were once places where employees could openly voice their doubts have been curtailed with community guidelines under the pretext of reducing toxicity; several employees said they viewed the restrictions as a way of policing speech.

“There is a great amount of frustration, a great amount of this sense of like, what are we even doing?” Mitchell said. “Even if there aren’t firm directives at Google to stop doing ethical work, the atmosphere is one where people who are doing the kind of work feel really unsupported and ultimately will probably do less good work because of it.”

When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings — a stance that has been critiqued by some in the field as a form of marketing — rather than the day-to-day scenarios that already have the potential to be harmful.

El-Mahdi El-Mhamdi, a former research scientist at Google, said he left the company in February over its refusal to engage with ethical AI issues head-on. Late last year, he said, he co-authored a paper that showed it was mathematically impossible for foundational AI models to be large, robust and remain privacy-preserving.

He said the company raised questions about his participation in the research while using his corporate affiliation. Rather than go through the process of defending his work, he said he volunteered to drop the affiliation with Google and use his academic credentials instead.

“If you want to stay on at Google, you have to serve the system and not contradict it,” El-Mhamdi said.

© 2023 Bloomberg LP



Microsoft’s Bing Plans AI Ads, Testing Them in Early Version of Chatbot

Microsoft has started discussing with ad agencies how it plans to make money from its revamped Bing search engine powered by generative artificial intelligence as the tech company seeks to battle Google’s dominance.

In a meeting with a major ad agency this week, Microsoft showed off a demo of the new Bing and said it plans to allow paid links within responses to search results, said an ad executive, who spoke about the private meeting on the condition of anonymity.

Generative AI, which can produce original answers in a human voice in response to open-ended questions or requests, has recently captivated the world. Last week, Microsoft and Alphabet‘s Google announced new generative AI chatbots a day apart from each other. Those bots, which have not yet rolled out widely to users, will be able to synthesize material on the web for complex search queries.

Early search results and conversations with Microsoft’s Bing and Google’s chatbot called Bard have shown they can be unpredictable. Alphabet lost $100 billion (nearly Rs. 8,27,500 crore) in market value on the day when it released a promotional video for Bard that showed the chatbot sharing inaccurate information.

Microsoft expects the more human responses from the Bing AI chatbot will generate more users for its search function and therefore more advertisers. Advertisements within the Bing chatbot may also enjoy more prominence on the page compared to traditional search ads.

Microsoft is already testing ads in its early version of the Bing chatbot, which is available to a limited number of users, according to the ad executive and ads seen by Reuters this week.

The company said it is taking traditional search ads, in which brands pay to have their websites or products appear on search results for keywords related to their business, and inserting them into responses generated by the Bing chatbot, the ad executive said.

Microsoft declined to comment on the specifics of its plans.

Microsoft is also planning another ad format within the chatbot that will be geared toward advertisers in specific industries. For example, when a user asks the new AI-powered Bing “what are the best hotels in Mexico?”, hotel ads could pop up, according to the ad executive.

Integrating ads into the Bing chatbot, which can be expanded to fill the top of the search page, could help ensure that ads are not pushed further down the page below the chatbot.

Omnicom, a major ad group that works with brands like AT&T and Unilever, has told clients that search ads could generate lower revenue in the short term if the chatbots take up the top of search pages without including any ads, according to a note to clients last week, which was reviewed by Reuters.

The new Bing, which has a waitlist of millions of people for access, is a potentially lucrative opportunity for Microsoft. The company said during an investor and press presentation last week that every percentage point of market share it gains in the search advertising market could bring in another $2 billion (nearly Rs. 16,550 crore) of ad revenue.

Microsoft’s Edge web browser, which uses the Bing search engine, has a market share under 5 percent worldwide, according to one estimate from web analytics firm StatCounter.

Michael Cohen, executive vice president of performance media at media agency Horizon Media, who received a demo of Bing during a separate meeting with Microsoft representatives, said the company indicated that links at the bottom of Bing’s AI-generated search responses could be places for ads.

“They seem intent on starting off immediately with paid ads integrated,” Cohen said, adding that Microsoft said more information about the strategy could come in early March.

This week, when a Reuters reporter asked the new version of Bing outfitted with AI for the price of car air filters, Bing included advertisements for filters sold by auto parts website Parts Geek.

Parts Geek did not immediately respond to questions about whether it was aware of its ads appearing in the new Bing chatbot.

Microsoft, when asked about the Parts Geek ads, said the potential of the new AI technology in advertising is only beginning to be explored and it aims to work with its partners and the ad industry.

Despite the early tests, Microsoft has not provided a timeline for when brands will be able to directly purchase ads within the chatbot, Cohen and the ad executive said.

In the long term, conversational AI is likely to become the dominant way consumers search on the internet, Omnicom said in its letter to clients.

“It is not an exaggeration to say that (Microsoft and Google’s) announcements signal the biggest change to search in 20 years,” Omnicom said.

© Thomson Reuters 2023



Google Warns Against Pitfalls of AI in Chatbots Amid Development of Bard Against ChatGPT: Report

The boss of Google’s search engine warned against the pitfalls of artificial intelligence in chatbots in a newspaper interview published on Saturday, as Google parent company Alphabet battles to compete with blockbuster app ChatGPT.

“This kind of artificial intelligence we’re talking about right now can sometimes lead to something we call hallucination,” Prabhakar Raghavan, senior vice president at Google and head of Google Search, told Germany’s Welt am Sonntag newspaper.

“This then expresses itself in such a way that a machine provides a convincing but completely made-up answer,” Raghavan said in comments published in German. One of the fundamental tasks, he added, was keeping this to a minimum.

Google has been on the back foot after OpenAI, a startup Microsoft is backing with around $10 billion (roughly Rs. 82,500 crore), in November introduced ChatGPT, which has since wowed users with its strikingly human-like responses to user queries.

Alphabet introduced Bard, its own chatbot, earlier this week, but the software shared inaccurate information in a promotional video in a gaffe that cost the company $100 billion (roughly Rs. 8,27,500 crore) in market value on Wednesday.

Alphabet, which is still conducting user testing on Bard, has not yet indicated when the app could go public.

“We obviously feel the urgency, but we also feel the great responsibility,” Raghavan said. “We certainly don’t want to mislead the public.”

Recently, Microsoft has announced a multimillion-dollar partnership with ChatGPT maker OpenAI to unveil new products. Google, on the other hand, is working to develop Bard while also investing heavily in other AI startups.

The services that Google’s Bard and ChatGPT would offer are similar. Users will have to key in a question, a request, or give a prompt to receive a human-like response. Microsoft and Google plan to embed AI tools to bolster their search services Bing and Google Search, which account for a big chunk of revenue.

© Thomson Reuters 2023



Google AI Chatbot Bard Caught Providing Inaccurate Information in Company Ad

Google published an online advertisement in which its much-anticipated AI chatbot Bard delivered an inaccurate answer.

The tech giant posted a short GIF video of Bard in action via Twitter, describing the chatbot as a “launchpad for curiosity” that would help simplify complex topics.

In the advertisement, Bard is given the prompt: “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year old about?”

Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside Earth‘s solar system, known as an exoplanet. This is inaccurate.

The first pictures of exoplanets were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004, as confirmed by NASA.

The error was spotted hours before Google hosted a launch event for Bard in Paris, where senior executive Prabhakar Raghavan promised that users would use the technology to interact with information in “entirely new ways”.

Raghavan presented Bard on Wednesday as the future of the company, telling audience members that by using generative AI, “the only limit to search will be your imagination”.

Google’s launch event came one day after Microsoft unveiled plans to integrate its rival AI chatbot ChatGPT into its Bing search engine and other products.

Google did not immediately respond to a request for comment. It announced the launch of Bard on Monday.

Bard will seek knowledge based on the responses provided by users, as well as the information available on the web. The company is initially rolling out the AI system for testers along with a lightweight model version of LaMDA.

© Thomson Reuters 2023


Google to Launch ChatGPT Rival Bard, Releases AI Service to Early Testers

Google-parent Alphabet is planning to launch a chatbot service and more artificial intelligence for its search engine as well as developers, marking a riposte to Microsoft in a rivalry to lead a new wave of technology.

In a blog post on Monday, Alphabet CEO Sundar Pichai said the company is opening a conversational AI service called Bard to a group of test users for feedback, followed by a public release in the coming weeks.

According to the blog post, the experimental conversational AI service Bard is powered by LaMDA (Language Model for Dialogue Applications), which Google unveiled two years ago. Pichai added that Bard will combine the “power, intelligence and creativity of the company’s large language models.”

Bard will seek knowledge based on the responses provided by users, as well as the information available on the web. The company is initially rolling out the AI system for testers along with a lightweight model version of LaMDA. The focus for now is on collecting feedback to improve the AI system for future applications.

Bard is Alphabet’s answer to OpenAI’s ChatGPT, which is backed by Microsoft. ChatGPT has been in the news for becoming the fastest-growing consumer application in history, beating TikTok and Instagram: it is estimated to have reached 100 million monthly active users in January, just two months after launch.

ChatGPT can generate articles, essays, jokes and even poetry in response to prompts. OpenAI, a private company backed by Microsoft, made it available to the public for free in late November.

Apart from Bard, Google is also focused on supporting other AI systems through Google Cloud partnerships, including Cohere, C3.ai and Anthropic. It was recently reported that Google has invested almost $400 million (roughly Rs. 3,299 crore) in Anthropic.


