Microsoft’s Salaried Staff Won’t Get Raise This Year, Reveals Leaked CEO Mail: Report

Microsoft will not raise salaries for full-time employees this year and is reducing budget for bonuses and stock awards, Insider reported on Wednesday, citing an internal email by CEO Satya Nadella.

The tech giant did not immediately respond to a Reuters request for comment.

“Last year, we made a significant investment in compensation driven by market conditions and company performance, nearly doubling our global merit budget…this year the economic conditions are very different across many dimensions,” the report quoted Nadella as saying.

In January, Microsoft said it would let go of 10,000 workers, adding to the tens of thousands of layoffs announced before that across the technology sector as it deals with slowing growth in a turbulent economy.

Microsoft has now squarely placed its focus on generative AI, an area the industry sees as a bright spot.

In collaboration with ChatGPT maker OpenAI, which has received billions of dollars in funding from Microsoft, the tech giant has been infusing the AI tech into its Office products as well as its search engine Bing.

Last week, Microsoft expanded public access to its generative artificial intelligence programs, despite fears that tech firms are rushing ahead too quickly with potentially dangerous technology.

In March this year, it was reported that Microsoft-owned GitHub laid off 142 people in India, including the entire staff in its engineering division. Those affected by the decision were deployed across the company’s offices in Bengaluru, Hyderabad, and Delhi. A GitHub spokesperson described the decision as part of the company’s reorganisation plan.


Xiaomi launched its camera-focussed flagship Xiaomi 13 Ultra smartphone, while Apple opened its first stores in India this week. We discuss these developments, as well as other reports on smartphone-related rumours and more on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Google, Microsoft, OpenAI CEOs Meet US President Biden at White House, Discuss AI Risks

President Joe Biden attended a White House meeting with CEOs of top artificial intelligence companies, including Alphabet’s Google and Microsoft, on Thursday to discuss risks and safeguards as the technology catches the attention of governments and lawmakers globally.

Generative artificial intelligence has become a buzzword this year, with apps such as ChatGPT capturing the public’s fancy, sparking a rush among companies to launch similar products they believe will change the nature of work.

Millions of users have begun testing such tools, which supporters say can make medical diagnoses, write screenplays, create legal briefs and debug software, leading to growing concern about how the technology could lead to privacy violations, skew employment decisions, and power scams and misinformation campaigns.

Biden, who “dropped by” the meeting, has also used ChatGPT, a White House official told Reuters. “He’s been extensively briefed on ChatGPT and (has) experimented with it,” said the official, who asked that they not be named.

Thursday’s two-hour meeting, which began at 11:45am ET (9:15pm IST), included Google’s Sundar Pichai, Microsoft’s Satya Nadella, OpenAI’s Sam Altman and Anthropic’s Dario Amodei, along with Vice President Kamala Harris and administration officials including Biden’s Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan, Director of the National Economic Council Lael Brainard and Secretary of Commerce Gina Raimondo.

Harris said in a statement the technology has the potential to improve lives but could pose safety, privacy and civil rights concerns. She told the chief executives they have a “legal responsibility” to ensure the safety of their artificial intelligence products and that the administration is open to advancing new regulations and supporting new legislation on artificial intelligence.

Ahead of the meeting, OpenAI’s Altman told reporters the White House wants to “get it right.”

“It’s good to try to get ahead of this,” he said when asked if the White House was moving quickly enough on AI regulation. “It’s definitely going to be a challenge, but it’s one I’m sure we can handle.”

The administration also announced a $140 million (nearly Rs. 1,150 crore) investment from the National Science Foundation to launch seven new AI research institutes and said the White House’s Office of Management and Budget would release policy guidance on the use of AI by the federal government. Leading AI developers, including Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability AI, will participate in a public evaluation of their AI systems.

Shortly after Biden announced his reelection bid, the Republican National Committee produced a video featuring a dystopian future during a second Biden term, which was built entirely with AI imagery.

Such political ads are expected to become more common as AI technology proliferates.

United States regulators have so far stopped short of the tough approach European governments have taken on tech regulation and on crafting strong rules for deepfakes and misinformation.

“We don’t see this as a race,” the White House official said, adding that the administration is working closely with the US-EU Trade & Technology Council on the issue.

In February, Biden signed an executive order directing federal agencies to eliminate bias in their AI use. The Biden administration has also released an AI Bill of Rights and a risk management framework.

Last week, the Federal Trade Commission and the Department of Justice’s Civil Rights Division also said they would use their legal authorities to fight AI-related harm.

Tech giants have vowed many times to combat propaganda around elections, fake news about the COVID-19 vaccines, pornography and child exploitation, and hateful messaging targeting ethnic groups. But they have been unsuccessful, research and news events show.

© Thomson Reuters 2023  
 



EU Lawmakers Struggle to Finalise Law to Regulate ChatGPT and Generative AI

As recently as February, generative artificial intelligence (AI) tools such as ChatGPT did not feature prominently in EU lawmakers’ plans for regulating the technology.

The bloc’s 108-page proposal for the AI Act, published two years earlier, included only one mention of the word “chatbot.” References to AI-generated content largely referred to deepfakes: images or audio designed to impersonate human beings.

By mid-April, however, members of European Parliament (MEPs) were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago.

That scramble culminated on Thursday with a new draft of the legislation which identified copyright protection as a core piece of the effort to keep AI in check.

Interviews with four lawmakers and two other sources close to discussions reveal for the first time how over just 11 days this small group of politicians hammered out what could become landmark legislation, reshaping the regulatory landscape for OpenAI and its competitors.

The draft bill is not final and lawyers say it will likely take years to come into force.

The speed of their work, though, is also a rare example of consensus in Brussels, which is often criticised for the slow pace of decision-making.

Last-minute changes

Since launching in November, ChatGPT has become the fastest-growing app in history, and sparked a flurry of activity from Big Tech competitors and investment in generative AI startups like Anthropic and Midjourney.

The runaway popularity of such applications led EU industry chief Thierry Breton and others to call for regulation of ChatGPT-like services.

An organisation backed by Elon Musk, the billionaire CEO of Tesla and Twitter, took it up a notch by issuing a letter warning of existential risk from AI and calling for stricter regulations.

On April 17, the dozen MEPs involved in drafting the legislation signed an open letter agreeing with some parts of Musk’s letter and urging world leaders to hold a summit to find ways to control the development of advanced AI.

That same day, however, two of them — Dragos Tudorache and Brando Benifei — proposed changes that would force companies with generative AI systems to disclose any copyrighted material used to train their models, according to four sources present at the meetings, who requested anonymity due to the sensitivity of the discussions.

That tough new proposal received cross-party support, the sources said.

One proposal by conservative MEP Axel Voss — forcing companies to request permission from rights holders before using the data — was rejected as too restrictive and something that could hobble the emerging industry.  

After thrashing out the details over the next week, the EU outlined proposed laws that could force an uncomfortable level of transparency on a notoriously secretive industry.

“I must admit that I was positively surprised on how we converged rather easily on what should be in the text on these models,” Tudorache told Reuters on Friday.

“It shows there is a strong consensus, and a shared understanding on how to regulate at this point in time.”

The committee will vote on the deal on May 11 and if successful, it will advance to the next stage of negotiation, the trilogue, where EU member states will debate the contents with the European Commission and Parliament.

“We are waiting to see if the deal holds until then,” one source familiar with the matter said.

Big Brother vs the Terminator

Until recently, MEPs were still unconvinced that generative AI deserved any special consideration.

In February, Tudorache told Reuters that generative AI was “not going to be covered” in-depth. “That’s another discussion I don’t think we are going to deal with in this text,” he said.

Citing data security risks rather than warnings of human-like intelligence, he said: “I am more afraid of Big Brother than I am of the Terminator.”

But Tudorache and his colleagues now agree on the need for laws specifically targeting the use of generative AI.

Under new proposals targeting “foundation models,” companies like OpenAI, which is backed by Microsoft, would have to disclose any copyrighted material — books, photographs, videos and more — used to train their systems.

Claims of copyright infringement have rankled AI firms in recent months, with Getty Images suing Stability AI for using copyrighted photos to train its Stable Diffusion system. OpenAI has also faced criticism for refusing to share details of the dataset used to train its software.

“There have been calls from outside and inside the Parliament for a ban or classifying ChatGPT as high-risk,” said MEP Svenja Hahn. “The final compromise is innovation-friendly as it does not classify these models as ‘high risk,’ but sets requirements for transparency and quality.”

© Thomson Reuters 2023



ChatGPT Restored in Italy After Microsoft-Backed OpenAI Responds to Regulator

The ChatGPT chatbot was reactivated in Italy after its maker OpenAI addressed issues raised by Italy’s data protection authority, the agency and the company confirmed on Friday.

Microsoft-backed OpenAI took ChatGPT offline in Italy last month after the country’s data protection authority, also known as Garante, temporarily banned the chatbot and launched a probe over the artificial intelligence application’s suspected breach of privacy rules.

Garante had given OpenAI until Sunday to address its concerns before allowing the chatbot to resume operating in the country.

Last month, Garante said there was an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot.

Garante had also accused OpenAI of failing to check the age of ChatGPT’s users, who are supposed to be aged 13 or above. OpenAI said it will offer a tool to verify users’ ages in Italy upon sign-up.

The company said on Friday it will provide greater visibility of its privacy policy and user content opt-out form.

It will also provide a new form for European Union users to exercise their right to object to its use of personal data to train its models, a company spokesperson said.

The form requires people who want to opt out to provide detailed personal information, including evidence of data processing via relevant prompts.

Garante said it recognises the steps taken to combine technological progress with respect for people’s rights, and hopes that the company will continue along this path of compliance with European data protection regulations.

Italy was the first western European country to curb ChatGPT, but the chatbot’s rapid development has attracted attention from lawmakers and regulators in several countries.

A committee of European Union lawmakers on Thursday agreed on new rules that would force companies deploying generative AI tools, such as ChatGPT, to disclose any copyrighted material used to develop their systems.

Following Garante’s interest in ChatGPT, the European Data Protection Board, the body that unites Europe’s national privacy watchdogs, set up a task force on the chatbot earlier this month.

Garante said it will continue its probe of ChatGPT and will work with the special task force.  

© Thomson Reuters 2023



Microsoft, Alphabet, Other AI Companies Urged to Prioritize Security Measures for New Technologies

The chair of the Senate Intelligence Committee on Wednesday urged CEOs of several artificial intelligence (AI) companies to prioritize security measures, combat bias, and responsibly roll out new technologies.

Democratic Senator Mark Warner raised concerns about potential risks posed by AI technology. “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field,” said Warner, who sent letters to the CEOs of OpenAI, Scale AI, Meta Platforms, Alphabet’s Google, Apple, Stability AI, Midjourney, Anthropic, Percipient.ai, and Microsoft.

“With the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” Warner said.

Earlier this month, Senate Majority Leader Chuck Schumer said he had launched an effort to establish AI rules and address national security and education concerns, as use of programs like ChatGPT becomes widespread.

Schumer, a Democrat, said in a statement he had drafted and circulated a “framework that outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the US advances and leads in this transformative technology.”

ChatGPT, an AI program that recently grabbed the public’s attention for its ability to write answers quickly to a wide range of queries, in particular has attracted US lawmakers’ attention. It has become the fastest-growing consumer application in history with more than 100 million monthly active users.

Microsoft is a big investor in OpenAI, which created ChatGPT. The software company and Google have been pouring billions of dollars into AI to gain an edge amid heightened competition in Silicon Valley.

© Thomson Reuters 2023



Big Tech Investors to Scrutinise Profits After Industry-Wide Layoffs, Firms to Highlight AI as Growth Driver

A quarter into record layoffs, investors in US tech giants will scrutinize if the cost cuts boosted profits to their satisfaction, while the companies emphasize how artificial intelligence will be their next growth driver.

Microsoft, Google parent Alphabet, Instagram owner Meta Platforms, and Amazon.com all report quarterly results this week.

Together, they command more than $5 trillion in market capitalisation, or more than 14 percent of the value of the S&P 500 index.

Between Microsoft, Alphabet, and Meta, analysts expect profits to rise 4.5 percent, on average, from the immediately preceding quarter, led by an 11.8 percent jump in Meta’s bottom line, according to Refinitiv. From a year earlier, profit is expected to slump nearly 16 percent, on average, with Microsoft expected to perform the least poorly with a 0.5 percent slip.

These three companies, along with Amazon, said between November and March they would slash 70,000 jobs in a rapidly weakening economy, following a pandemic-led hiring boom. Meta has announced two rounds of layoffs.

Amazon.com, which reported a big drop in fourth-quarter profit due to valuation losses on its investment in money-losing EV maker Rivian Automotive, is expected to post a first-quarter profit roughly eight times that of the immediately preceding quarter.

According to research firm YipitData, Amazon’s North America sales are set to beat Wall Street estimates in the first quarter.

GRAPHIC: Big Tech stocks over the last six months – https://www.reuters.com/graphics/BIGTECH-STOCK/zgvobzmoqpd/Pasted%20image%201682082335284.png

The companies are likely to give updates on their AI efforts, a trend noticeable since last quarter when chief executives packed earnings calls with mentions of the technology.

“If last quarter’s message from Big Tech was all about efficiency and bottom line improvement, this quarter’s message is likely to be more forward-looking around the massive potential of artificial intelligence,” Andrew Lipsman, an analyst at Insider Intelligence, said.

Microsoft has integrated OpenAI’s ChatGPT technology into its search engine Bing, pitting it against market leader Google.

Google has begun the public release of its chatbot Bard.

Amazon’s cloud division AWS, the world’s largest, has released a suite of technologies aimed at helping other companies develop their own chatbots backed by AI, and Meta has published an AI model that can pick out individual objects from within an image.

“It’s sort of a double-edged sword because there is also pressure for these companies to improve cash flow in an economy that is decelerating,” Itau BBA analyst Thiago Kapulskis said.

“There are expectations that companies could create or do even more with AI … every tech investor is expecting those companies to be in the frontier.”

The cloud businesses of Amazon, Google, and Microsoft were also more stable than expected, analysts said.

Microsoft and Alphabet stocks have both risen 19 percent so far this year. Apple and Amazon are up 28 percent and 23 percent, respectively. Meta shares have gained nearly 77 percent.


The largest company in the world, Apple, which is scheduled to report earnings on May 4, is dealing with slowing demand for iPhones and MacBooks as consumers curb spending. 

© Thomson Reuters 2023 



Google Bard Now Helps Write Software Code in 20 Programming Languages

Alphabet’s Google said on Friday it will update Bard, its generative artificial intelligence (AI) chatbot, to help people write code to develop software, as the tech giant plays catch-up in a fast-moving race on AI technology.

Last month, the company started the public release of Bard to gain ground on Microsoft.

The release of ChatGPT, a chatbot from the Microsoft-backed startup OpenAI, last year caused a sprint in the technology sector to put AI into more users’ hands.

Google describes Bard as an experiment allowing collaboration with generative AI, technology that relies on past data to create rather than identify content.

Bard will be able to code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users, Google said on Friday.

The company said Bard can also optimise code to make it faster or more efficient with simple prompts such as “Could you make that code faster?”
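The article does not reproduce Bard’s output, but as a hypothetical illustration of the kind of rewrite a “make that code faster” prompt targets, here is an invented Python example: the same deduplication routine before and after swapping a quadratic-time list scan for a constant-time set lookup.

```python
# A deliberately slow snippet a user might paste into a chatbot
# alongside the prompt "Could you make that code faster?"
def dedupe_slow(items):
    seen = []
    out = []
    for item in items:
        if item not in seen:   # list membership is O(n), so the loop is O(n^2)
            seen.append(item)
            out.append(item)
    return out

# The kind of rewrite an assistant might suggest: identical output,
# but set membership is O(1), making the whole pass O(n).
def dedupe_fast(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```

The function and its names are made up for this sketch; the point is that such tools propose algorithmic changes, not just cosmetic ones.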

Currently, Bard can be accessed by a small set of users who can chat with the bot and ask questions instead of running Google’s traditional search tool.

The company began the public release of its chatbot Bard in late March this year, seeking users and feedback to gain ground on Microsoft in a fast-moving race on artificial intelligence technology. Bard could show three different versions or “drafts” of any given answer among which users could toggle, and it displayed a button stating “Google it,” should a user desire web results for a query.

© Thomson Reuters 2023



SAP Plans to Use OpenAI’s Chatbot ChatGPT Amid Growth in Quarterly Revenue

Business software maker SAP on Friday reported first-quarter revenue above analysts’ expectations, backed by growth in its cloud business but lowered its outlook for the year due to the divestment of its Qualtrics unit.

SAP, which in January announced plans to cut 3,000 jobs as it looked to cut costs, foresees no more restructuring this year and plans to use artificial intelligence technologies like generative AI in its products.

While tougher economic conditions have riled big technology companies, SAP has still been able to grow its revenue by 10 percent in the first quarter to EUR 7.44 billion (roughly Rs. 60,700 crore), beating a company-provided consensus.

It said it was working with Microsoft-backed OpenAI’s chatbot ChatGPT, which can provide human-like responses to questions.

“We were studying ChatGPT for quite a while… we have built over 50 AI use cases, embedding them with our technology,” CEO Christian Klein said in an interview. Those use cases will be available to customers next month after its annual Sapphire conference, he said.

SAP also has an internal committee with customers, researchers and analysts to check for biases in AI use cases and guard against potential misuse of the technology, Klein said.

Revenue from SAP’s lucrative cloud business grew 24 percent year-on-year, broadly in line with consensus. SAP has already excluded the profits of its Qualtrics subsidiary, which it divested last month, from the current earnings report.

For the year, SAP expects non-IFRS operating profit in the range of EUR 8.6 billion to EUR 8.9 billion (roughly Rs. 70,000 crore to Rs. 73,000 crore), EUR 200 million (roughly Rs. 1,600 crore) less than before. The cloud revenue forecast is seen down by EUR 1.3 billion (roughly Rs. 10,700 crore) to between EUR 14 billion and EUR 14.4 billion (roughly Rs. 1,14,900 crore to Rs. 1,18,100 crore).

“Underlying guidance is essentially unchanged, although updated to reflect the disposal of Qualtrics,” Jefferies analysts wrote in a client note.

© Thomson Reuters 2023



Alphabet to Consolidate Google Brain, DeepMind AI Research Units in Race to Keep Up With Rival ChatGPT

Alphabet is combining Google Brain and DeepMind, as it doubles down on artificial intelligence research in its race to compete with rival systems like OpenAI’s ChatGPT chatbot.

The new division will be led by DeepMind CEO Demis Hassabis, and the combined unit will ensure “bold and responsible development of general AI”, Alphabet CEO Sundar Pichai said in a blog post on Thursday.

Alphabet said the teams that are being combined have delivered a number of high-profile projects including the transformer, technology that formed the bedrock of some of OpenAI’s own work.

Going forward, the Alphabet staff will work on “multimodal” AI, like OpenAI’s latest model GPT-4, which can respond not only to text prompts but to image inputs as well to generate new content.

Google has for decades dominated the search market, with a share of over 80 percent, but Wall Street fears that the Alphabet unit could fall behind Microsoft Corp in the fast-moving AI race. Technology from OpenAI, funded by Microsoft, powers the rival software maker’s updated Bing search engine.

Alphabet announced the launch of Bard in February to take on ChatGPT as well. It lost $100 billion in value on Feb. 8 after Bard shared inaccurate information in a promotional video and a company event failed to dazzle.

Alphabet shares were up 2 percent on Thursday. Earlier this week, it was reported that Alphabet shares fell over 4 percent in premarket trading after a report that said South Korea’s Samsung Electronics was considering replacing Google with Microsoft-owned Bing as the default search engine on its devices.

The report, published by the New York Times over the weekend, underscored the growing challenges Google’s $162-billion (roughly Rs. 13,29,477 crore) a-year search engine business faces from Bing — a minor player that has risen in prominence recently after the integration of the artificial intelligence tech behind ChatGPT.

Google’s reaction to the threat was “panic” as the company earns an estimated $3 billion (roughly Rs. 24,625 crore) in annual revenue from the Samsung contract, the report said, citing internal messages.

© Thomson Reuters 2023



Google’s Rush to Take Its AI Chatbot Bard Public Led to Ethical Lapses, Employees Say

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard for suggestions on how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. But the November 2022 debut of rival OpenAI’s popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.

That was a markedly faster pace of development for the technology, and one that could have profound societal impact. The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said. The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development, they said.

Google is aiming to revitalize its maturing search business around the cutting-edge technology, which could put generative AI into millions of phones and homes around the world — ideally before OpenAI, with the backing of Microsoft, beats the company to it.

“AI ethics has taken a back seat,” said Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google manager. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

In response to questions from Bloomberg, Google said responsible AI remains a top priority at the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Brian Gabriel, a spokesperson. The team working on responsible AI shed at least three members in a January round of layoffs at the company, including the head of governance and programs. The cuts affected about 12,000 workers at Google and its parent company.

Google, which over the years spearheaded much of the research underpinning today’s AI advancements, had not yet integrated a consumer-friendly version of generative AI into its products by the time ChatGPT launched. The company was cautious of its power and the ethical considerations that would go hand-in-hand with embedding the technology into search and other marquee products, the employees said.

By December, senior leadership decreed a competitive “code red” and changed its appetite for risk. Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said. Still, it needed to get its ethics teams on board. That month, the AI governance lead, Jen Gennai, convened a meeting of the responsible innovation group, which is charged with upholding the company’s AI principles.

Gennai suggested that some compromises might be necessary in order to pick up the pace of product releases. The company assigns scores to its products in several important categories, meant to measure their readiness for release to the public. In some, like child safety, engineers still need to clear the 100 percent threshold. But Google may not have time to wait for perfection in other areas, she advised in the meeting. “‘Fairness’ may not be, we have to get to 99 percent,” Gennai said, referring to its term for reducing bias in products. “On ‘fairness,’ we might be at 80, 85 percent, or something” to be enough for a product launch, she added.

In February, one employee raised issues in an internal message group: “Bard is worse than useless: please do not launch.” The note was viewed by nearly 7,000 people, many of whom agreed that the AI tool’s answers were contradictory or even egregiously wrong on simple factual queries.

The next month, Gennai overruled a risk evaluation submitted by members of her team stating Bard was not ready because it could cause harm, according to people familiar with the matter. Shortly after, Bard was opened up to the public — with the company calling it an “experiment”.

In a statement, Gennai said it wasn’t solely her decision. After the team’s evaluation she said she “added to the list of potential risks from the reviewers and escalated the resulting analysis” to a group of senior leaders in product, research and business. That group then “determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” she said.

Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety. Researchers building AI outnumber those focused on safety by a 30-to-1 ratio, the Center for Humane Technology said at a recent presentation, underscoring the often lonely experience of voicing concerns in a large organization.

As progress in artificial intelligence accelerates, new concerns about its societal effects have emerged. Large language models, the technologies that underpin ChatGPT and Bard, ingest enormous volumes of digital text from news articles, social media posts and other internet sources, and then use that written material to train software that predicts and generates content on its own when given a prompt or query. That means that by their very nature, the products risk regurgitating offensive, harmful or inaccurate speech.

But ChatGPT’s remarkable debut meant that by early this year, there was no turning back. In February, Google began a blitz of generative AI product announcements, touting chatbot Bard, and then the company’s video service YouTube, which said creators would soon be able to virtually swap outfits in videos or create “fantastical film settings” using generative AI. Two weeks later, Google announced new AI features for Google Cloud, showing how users of Docs and Slides will be able to, for instance, create presentations and sales-training documents, or draft emails. On the same day, the company announced that it would be weaving generative AI into its health-care offerings. Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.

The challenge of developing cutting-edge artificial intelligence in an ethical manner has long spurred internal debate. The company has faced high-profile blunders over the past few years, including an embarrassing incident in 2015 when its Photos service mistakenly labeled images of a Black software developer and his friend as “gorillas.”

Three years later, the company said it had not fixed the underlying AI technology but had instead erased all results for the search terms “gorilla,” “chimp,” and “monkey,” a solution it says “a diverse group of experts” weighed in on. The company also built up an ethical AI unit tasked with carrying out proactive work to make AI fairer for its users.

But a significant turning point, according to more than a dozen current and former employees, was the ousting of AI researchers Timnit Gebru and Margaret Mitchell, who co-led Google’s ethical AI team until they were pushed out in December 2020 and February 2021 over a dispute regarding fairness in the company’s AI research. Samy Bengio, a computer scientist who oversaw Gebru and Mitchell’s work, and several other researchers would end up leaving for competitors in the intervening years.

After the scandal, Google tried to improve its public reputation. The responsible AI team was reorganized under Marian Croak, then a vice president of engineering. She pledged to double the size of the AI ethics team and strengthen the group’s ties with the rest of the company.

Even after the public pronouncements, some found it difficult to work on ethical AI at Google. One former employee said they asked to work on fairness in machine learning and were routinely discouraged, to the point that it affected their performance review. Managers protested that it was getting in the way of their “real work,” the person said.

Those who remained working on ethical AI at Google were left questioning how to do the work without putting their own jobs at risk. “It was a scary time,” said Nyalleng Moorosi, a former researcher at the company who is now a senior researcher at the Distributed AI Research Institute, founded by Gebru. Doing ethical AI work means “you were literally hired to say, I don’t think this is population-ready,” she added. “And so you are slowing down the process.”

To this day, AI ethics reviews of products and features, two employees said, are almost entirely voluntary at the company, with the exception of research papers and the review process Google Cloud conducts on customer deals and products for release. AI research in delicate areas such as biometrics, identity features, or children is given a mandatory “sensitive topics” review by Gennai’s team, but other projects do not necessarily receive ethics reviews, though some employees reach out to the ethical AI team even when not required.

Still, when employees on Google’s product and engineering teams look for a reason the company has been slow to market on AI, the public commitment to ethics tends to come up. Some in the company believed new tech should be in the hands of the public as soon as possible, in order to make it better faster with feedback.

Before the code red, it could be hard for Google engineers to get their hands on the company’s most advanced AI models at all, another former employee said. Engineers would often start brainstorming by playing around with other companies’ generative AI models to explore the possibilities of the technology before figuring out a way to make it happen within the bureaucracy, the former employee said.

“I definitely see some positive changes coming out of ‘code red’ and OpenAI pushing Google’s buttons,” said Gaurav Nemade, a former Google product manager who worked on its chatbot efforts until 2020. “Can they actually be the leaders and challenge OpenAI at their own game?” Recent developments — like Samsung reportedly considering replacing Google with Microsoft’s Bing, whose tech is powered by ChatGPT, as the search engine on its devices — have underscored the first-mover advantage in the market right now.

Some at the company said they believe that Google has conducted sufficient safety checks with its new generative AI products, and that Bard is safer than competing chatbots. But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.

Teams working on the new AI features have been siloed, making it hard for rank-and-file Googlers to see the full picture of what the company is working on. Company mailing lists and internal channels that were once places where employees could openly voice their doubts have been curtailed with community guidelines under the pretext of reducing toxicity; several employees said they viewed the restrictions as a way of policing speech.

“There is a great amount of frustration, a great amount of this sense of like, what are we even doing?” Mitchell said. “Even if there aren’t firm directives at Google to stop doing ethical work, the atmosphere is one where people who are doing the kind of work feel really unsupported and ultimately will probably do less good work because of it.”

When Google’s management does grapple with ethics concerns publicly, it tends to speak about hypothetical future scenarios in which an all-powerful technology escapes human control, a stance some in the field have critiqued as a form of marketing, rather than about the day-to-day scenarios that already have the potential to cause harm.

El-Mahdi El-Mhamdi, a former research scientist at Google, said he left the company in February over its refusal to engage with ethical AI issues head-on. Late last year, he said, he co-authored a paper showing it was mathematically impossible for foundational AI models to be simultaneously large, robust, and privacy-preserving.

He said the company raised questions about his participation in the research while using his corporate affiliation. Rather than go through the process of defending his work, he said he volunteered to drop the affiliation with Google and use his academic credentials instead.

“If you want to stay on at Google, you have to serve the system and not contradict it,” El-Mhamdi said.

© 2023 Bloomberg LP

