OpenAI Rolls Out Incognito Mode on ChatGPT That Does Not Save Users’ Conversation History

OpenAI is introducing what one employee called an “incognito mode” for its hit chatbot ChatGPT that does not save users’ conversation history or use it to improve its artificial intelligence, the company said Tuesday.

The San Francisco-based startup also said it planned a “ChatGPT Business” subscription with additional data controls.

The move comes as scrutiny has grown over how ChatGPT and other chatbots it inspired manage hundreds of millions of users’ data, commonly used to improve, or “train”, AI.

Italy last month banned ChatGPT for possible privacy violations, saying OpenAI could resume the service if it met demands such as giving consumers tools to object to the processing of their data. France and Spain also began probing the service.

Mira Murati, OpenAI’s chief technology officer, told Reuters the company was compliant with European privacy law and was working to reassure regulators.

The new features did not arise from Italy’s ChatGPT ban, she said, but from a months-long effort to put users “in the driver’s seat” regarding data collection.

“We’ll be moving more and more in this direction of prioritizing user privacy,” Murati said, with the goal that “it’s completely eyes off and the models are super aligned: they do the things that you want to do”.

User information has helped OpenAI make its software more reliable and reduce political bias, among other issues, she said, but added that the company still has challenges to tackle.

Tuesday’s product release lets users switch off “Chat History & Training” in their settings and export their data.

Nicholas Turley, the OpenAI product officer who likened this to an internet browser’s incognito mode, said the company still would retain conversations for 30 days to monitor for abuse before permanently deleting them.

In addition, the company’s business subscription, available in the coming months, will not use conversations for AI model training by default.

Microsoft, which has invested in OpenAI, already offers ChatGPT to businesses. Murati said that service would appeal to the cloud provider’s existing customers.

© Thomson Reuters 2023



ChatGPT Performs Worse Than Students at Accounting Exams, Struggles With Mathematical Process

Researchers found that students fared better on accounting exams than ChatGPT, OpenAI’s chatbot.

Despite this, they said that ChatGPT’s performance was “impressive” and that it was a “game changer that will change the way everyone teaches and learns – for the better.” The researchers from Brigham Young University (BYU), US, and 186 other universities wanted to know how OpenAI’s technology would fare on accounting exams. They have published their findings in the journal Issues in Accounting Education.

In the researchers’ accounting exam, students scored an overall average of 76.7 percent, compared to ChatGPT’s score of 47.4 percent.

ChatGPT scored higher than the student average on 11.3 percent of the questions, doing particularly well on accounting information systems (AIS) and auditing, but it performed worse on tax, financial, and managerial assessments. Researchers think this is likely because ChatGPT struggled with the mathematical processes the latter require.

The AI bot, which uses machine learning to generate natural language text, was further found to do better on true/false questions (68.7 percent correct) and multiple-choice questions (59.5 percent), but struggled with short-answer questions (between 28.7 and 39.1 percent).

In general, the researchers said that higher-order questions were harder for ChatGPT to answer. In fact, ChatGPT sometimes provided authoritative written descriptions for incorrect answers, or answered the same question in different ways.

They also found that ChatGPT often provided explanations for its answers even when those answers were incorrect. At other times, it selected the wrong multiple-choice answer despite providing an accurate description.

Importantly, the researchers noted that ChatGPT sometimes made up facts. When asked for a reference, for example, it generated a real-looking citation that was completely fabricated; the cited work, and sometimes the authors, did not even exist.

The bot was also seen making nonsensical mathematical errors, such as adding two numbers in a subtraction problem or dividing numbers incorrectly.

Wanting to add to the intense ongoing debate about how models like ChatGPT should factor into education, lead study author David Wood, a BYU professor of accounting, decided to recruit as many professors as possible to see how the AI fared against actual university accounting students.

His co-author recruiting pitch on social media exploded: 327 co-authors from 186 educational institutions in 14 countries participated in the research, contributing 25,181 classroom accounting exam questions.

They also recruited undergraduate BYU students to feed another 2,268 textbook test bank questions to ChatGPT. The questions covered AIS, auditing, financial accounting, managerial accounting and tax, and varied in difficulty and type (true/false, multiple choice, short answer).



Alphabet CEO Sundar Pichai Reaps $226 Million Compensation in 2022 Amid Layoffs

Alphabet Chief Executive Sundar Pichai received total compensation of about $226 million (roughly Rs. 1,850 crore) in 2022, more than 800 times the median employee’s pay, the company said in a securities filing on Friday.

Pichai’s compensation included stock awards of about $218 million (roughly Rs. 1,800 crore), the filing showed.

The pay disparity comes at a time when Alphabet, the parent company of Google, has been cutting jobs globally. The Mountain View, California-based company announced plans in January to cut 12,000 jobs around the world, equivalent to 6 percent of its global workforce.

Early this month, hundreds of Google employees staged a walkout at the company’s London offices following a dispute over layoffs.

In March, Google employees staged a walkout at the company’s Zurich offices after more than 200 workers were laid off.

Meanwhile, the company is working rapidly to make its chatbot Bard stand out among competitors. On Friday, Google announced that Bard, its generative artificial intelligence (AI) chatbot, will help people write code to develop software, as the tech giant plays catch-up in a fast-moving race on AI technology.

Bard will be able to code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users, Google said on Friday.

The company said Bard can also optimise code to make it faster or more efficient with simple prompts such as “Could you make that code faster?”.
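
As a purely hypothetical sketch, not taken from Google’s announcement, the kind of change a prompt such as “Could you make that code faster?” might produce in Python is replacing repeated list scans with a set lookup; the function names below are made up for illustration:

```python
# Hypothetical before/after, for illustration only; not Bard output.

def find_common_slow(a, b):
    # O(len(a) * len(b)): every "x in b" check scans the whole list b.
    return [x for x in a if x in b]


def find_common_fast(a, b):
    # O(len(a) + len(b)): build a set once, then do constant-time lookups.
    b_set = set(b)
    return [x for x in a if x in b_set]


if __name__ == "__main__":
    a = list(range(10_000))
    b = list(range(5_000, 15_000))
    assert find_common_slow(a[:200], b[:200]) == find_common_fast(a[:200], b[:200])
    print(len(find_common_fast(a, b)))  # 5000
```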

Currently, Bard can be accessed by a small set of users who can chat with the bot and ask questions instead of running Google’s traditional search tool.

© Thomson Reuters 2023



Google’s Rush to Take Its AI Chatbot Bard Public Led to Ethical Lapses, Employees Say

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard for suggestions on how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. But the November 2022 debut of rival OpenAI’s popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.

That was a markedly faster pace of development for the technology, and one that could have profound societal impact. The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said. The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development, they said.

Google is aiming to revitalize its maturing search business around the cutting-edge technology, which could put generative AI into millions of phones and homes around the world — ideally before OpenAI, with the backing of Microsoft, beats the company to it.

“AI ethics has taken a back seat,” said Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google manager. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

In response to questions from Bloomberg, Google said responsible AI remains a top priority at the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Brian Gabriel, a spokesperson. The team working on responsible AI shed at least three members in a January round of layoffs at the company, including the head of governance and programs. The cuts affected about 12,000 workers at Google and its parent company.

Google, which over the years spearheaded much of the research underpinning today’s AI advancements, had not yet integrated a consumer-friendly version of generative AI into its products by the time ChatGPT launched. The company was cautious of its power and the ethical considerations that would go hand-in-hand with embedding the technology into search and other marquee products, the employees said.

By December, senior leadership decreed a competitive “code red” and changed its appetite for risk. Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said. Still, it needed to get its ethics teams on board. That month, the AI governance lead, Jen Gennai, convened a meeting of the responsible innovation group, which is charged with upholding the company’s AI principles.

Gennai suggested that some compromises might be necessary in order to pick up the pace of product releases. The company assigns scores to its products in several important categories, meant to measure their readiness for release to the public. In some, like child safety, engineers still need to clear the 100 percent threshold. But Google may not have time to wait for perfection in other areas, she advised in the meeting. “‘Fairness’ may not be, we have to get to 99 percent,” Gennai said, referring to its term for reducing bias in products. “On ‘fairness,’ we might be at 80, 85 percent, or something” to be enough for a product launch, she added.

In February, one employee raised issues in an internal message group: “Bard is worse than useless: please do not launch.” The note was viewed by nearly 7,000 people, many of whom agreed that the AI tool’s answers were contradictory or even egregiously wrong on simple factual queries.

The next month, Gennai overruled a risk evaluation submitted by members of her team stating Bard was not ready because it could cause harm, according to people familiar with the matter. Shortly after, Bard was opened up to the public — with the company calling it an “experiment”.

In a statement, Gennai said it wasn’t solely her decision. After the team’s evaluation she said she “added to the list of potential risks from the reviewers and escalated the resulting analysis” to a group of senior leaders in product, research and business. That group then “determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” she said.

Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety. Researchers building AI outnumber those focused on safety by a 30-to-1 ratio, the Center for Humane Technology said at a recent presentation, underscoring the often lonely experience of voicing concerns in a large organization.

As progress in artificial intelligence accelerates, new concerns about its societal effects have emerged. Large language models, the technologies that underpin ChatGPT and Bard, ingest enormous volumes of digital text from news articles, social media posts and other internet sources, and then use that written material to train software that predicts and generates content on its own when given a prompt or query. That means that by their very nature, the products risk regurgitating offensive, harmful or inaccurate speech.
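
As a deliberately oversimplified sketch of that mechanism (real large language models use neural networks over tokens, not word counts), the toy program below learns which word tends to follow which in its training text and then continues a prompt; because it can only replay patterns it has ingested, it also hints at why such systems can regurgitate their source material:

```python
# Toy next-word predictor for illustration only; real LLMs are far more complex.
import random
from collections import Counter, defaultdict


def train(corpus: str) -> dict:
    # Record how often each word is followed by each other word ("bigram" counts).
    words = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts


def generate(counts: dict, prompt: str, length: int = 10) -> str:
    out = prompt.split()
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Pick the next word in proportion to how often it followed the last one.
        nxt = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(nxt)
    return " ".join(out)


corpus = "the model reads text and the model predicts the next word in the text"
model = train(corpus)
print(generate(model, "the model"))
```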

But ChatGPT’s remarkable debut meant that by early this year, there was no turning back. In February, Google began a blitz of generative AI product announcements, touting chatbot Bard, and then the company’s video service YouTube, which said creators would soon be able to virtually swap outfits in videos or create “fantastical film settings” using generative AI. Two weeks later, Google announced new AI features for Google Cloud, showing how users of Docs and Slides will be able to, for instance, create presentations and sales-training documents, or draft emails. On the same day, the company announced that it would be weaving generative AI into its health-care offerings. Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.

The challenge of developing cutting-edge artificial intelligence in an ethical manner has long spurred internal debate. The company has faced high-profile blunders over the past few years, including an embarrassing incident in 2015 when its Photos service mistakenly labeled images of a Black software developer and his friend as “gorillas.”

Three years later, the company said it did not fix the underlying AI technology, but instead erased all results for the search terms “gorilla,” “chimp,” and “monkey,” a solution that it says “a diverse group of experts” weighed in on. The company also built up an ethical AI unit tasked with carrying out proactive work to make AI fairer for its users.

But a significant turning point, according to more than a dozen current and former employees, was the ousting of AI researchers Timnit Gebru and Margaret Mitchell, who co-led Google’s ethical AI team until they were pushed out in December 2020 and February 2021 over a dispute regarding fairness in the company’s AI research. Samy Bengio, a computer scientist who oversaw Gebru and Mitchell’s work, and several other researchers would end up leaving for competitors in the intervening years.

After the scandal, Google tried to improve its public reputation. The responsible AI team was reorganized under Marian Croak, then a vice president of engineering. She pledged to double the size of the AI ethics team and strengthen the group’s ties with the rest of the company.

Even after the public pronouncements, some found it difficult to work on ethical AI at Google. One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review. Managers protested that it was getting in the way of their “real work,” the person said.

Those who remained working on ethical AI at Google were left questioning how to do the work without putting their own jobs at risk. “It was a scary time,” said Nyalleng Moorosi, a former researcher at the company who is now a senior researcher at the Distributed AI Research Institute, founded by Gebru. Doing ethical AI work means “you were literally hired to say, I don’t think this is population-ready,” she added. “And so you are slowing down the process.”

To this day, AI ethics reviews of products and features, two employees said, are almost entirely voluntary at the company, with the exception of research papers and the review process conducted by Google Cloud on customer deals and products for release. AI research in delicate areas like biometrics, identity features, or kids is given a mandatory “sensitive topics” review by Gennai’s team, but other projects do not necessarily receive ethics reviews, though some employees reach out to the ethical AI team even when not required.

Still, when employees on Google’s product and engineering teams look for a reason the company has been slow to market on AI, the public commitment to ethics tends to come up. Some in the company believed new tech should be in the hands of the public as soon as possible, in order to make it better faster with feedback.

Before the code red, it could be hard for Google engineers to get their hands on the company’s most advanced AI models at all, another former employee said. Engineers would often start brainstorming by playing around with other companies’ generative AI models to explore the possibilities of the technology before figuring out a way to make it happen within the bureaucracy, the former employee said.

“I definitely see some positive changes coming out of ‘code red’ and OpenAI pushing Google’s buttons,” said Gaurav Nemade, a former Google product manager who worked on its chatbot efforts until 2020. “Can they actually be the leaders and challenge OpenAI at their own game?” Recent developments — like Samsung reportedly considering replacing Google with Microsoft’s Bing, whose tech is powered by ChatGPT, as the search engine on its devices — have underscored the first-mover advantage in the market right now.

Some at the company said they believe that Google has conducted sufficient safety checks with its new generative AI products, and that Bard is safer than competing chatbots. But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.

Teams working on the new AI features have been siloed, making it hard for rank-and-file Googlers to see the full picture of what the company is working on. Company mailing lists and internal channels that were once places where employees could openly voice their doubts have been curtailed with community guidelines under the pretext of reducing toxicity; several employees said they viewed the restrictions as a way of policing speech.

“There is a great amount of frustration, a great amount of this sense of like, what are we even doing?” Mitchell said. “Even if there aren’t firm directives at Google to stop doing ethical work, the atmosphere is one where people who are doing the kind of work feel really unsupported and ultimately will probably do less good work because of it.”

When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings — a stance that has been critiqued by some in the field as a form of marketing — rather than the day-to-day scenarios that already have the potential to be harmful.

El-Mahdi El-Mhamdi, a former research scientist at Google, said he left the company in February over its refusal to engage with ethical AI issues head-on. Late last year, he said, he co-authored a paper that showed it was mathematically impossible for foundational AI models to be large, robust and remain privacy-preserving.

He said the company raised questions about his participation in the research while using his corporate affiliation. Rather than go through the process of defending his work, he said he volunteered to drop the affiliation with Google and use his academic credentials instead.

“If you want to stay on at Google, you have to serve the system and not contradict it,” El-Mhamdi said.

© 2023 Bloomberg LP



Microsoft Adds Bing AI Chatbot to SwiftKey Keyboard on Android and iOS: All Details

The Microsoft SwiftKey keyboard app is getting the new AI-powered Bing chat search engine based on the Generative Pre-trained Transformer (GPT) technology. With this latest integration, users can chat with the bot directly from their mobile keyboard, customise their texts and search for things without switching between apps. Bing offers three new key features on SwiftKey — Chat, Search, and Tone. With the Chat feature, users can make detailed queries, while Search lets them quickly explore the Web from the keyboard. The Tone functionality aims to improve communication and lets users tailor texts with AI to fit any situation. Microsoft earlier released Bing and Edge browser apps for smartphones.

On Thursday, Microsoft announced the addition of AI-powered Bing to the SwiftKey keyboard app via a blog post. With this update, Android and iOS users will get access to the unique features of Bing Chat. The latest Microsoft SwiftKey 3.0.1 update is rolling out via the App Store. Bing is also available via the Microsoft Start app for select users.

As mentioned, Bing comes with Chat, Search, and Tone options, which can be accessed from the Bing icon displayed at the top of the keyboard. The Chat feature can be used to make detailed queries. For instance, Microsoft says it will be useful for responding to someone’s message with a clever pun or texting new friends to propose a good local restaurant.

SwiftKey’s Tone feature can be used to communicate more effectively by using AI to customise the in-progress text to fit any situation. It will help users to frame their sentences to sound more professional, casual, polite, or concise enough for a social post.

The Search functionality allows users to quickly search the Web from their keyboard, without switching apps. This can be useful when a user is chatting with a friend and mid-conversation, they want to look up relevant information like the weather, restaurants nearby, or stock prices.

Search is open to all users, but to access the Tone and Chat sections, users will have to sign in with their Microsoft account. To use the latest functionality, SwiftKey should be set as the primary keyboard on Android and iOS.



Alibaba Invites Businesses to Test Its ChatGPT-Like AI Chatbot: Report

Tech giant Alibaba is seeking companies to test its Tongyi Qianwen AI chatbot, business publication STAR Market Daily reported on Friday, joining the rush to emulate the explosive success of ChatGPT.

The free-to-use ChatGPT, a large language model (LLM) application created by Microsoft-backed OpenAI, was released to the public last November and can generate articles and essays on demand in response to user prompts.

Alibaba has opened up registration for businesses to conduct testing of its AI application, STAR Market Daily reported, without specifying details.

A source close to the matter confirmed to Reuters that the application was an LLM targeted at business users.

Alibaba’s cloud computing division published a teaser on Friday, posting a message on social media simply stating: “Hello, my name is Tongyi Qianwen, this is our first time meeting, I welcome your feedback.”

The official website for the chatbot application merely has boxes to enter phone numbers and email addresses to request an invitation but provides no specific details about its exact use.

Alibaba Cloud did not respond immediately to an emailed request for comment.

A formal launch is expected at an Alibaba Cloud event on Tuesday. Daniel Zhang, CEO of Alibaba Group as well as the company’s cloud division, is scheduled to speak at the event.

Others to have joined the AI chatbot race include Baidu, with its Ernie Bot application open only to trial users at the moment.

On Saturday network gear maker Huawei Technologies is due to stage an event unveiling Pangu, its natural language processing (NLP) AI model.

SenseTime is also holding an event next week to showcase “cutting-edge advancements in artificial intelligence software”.

Last week Alibaba announced that it will restructure into six standalone divisions, each with its own board and CEO. Zhang is set to stay as the cloud division’s CEO.

© Thomson Reuters 2023
 



India Among Top 3 Markets for ChatGPT-Powered Bing Search Engine, Says Microsoft

India has emerged as one of the top three markets for Microsoft’s new Bing preview, which has ChatGPT incorporated into it, and is its biggest image creator market, a senior company official has said, asserting that the search engine is much better than its rival Google.

Powered by ChatGPT, Microsoft launched the new Bing preview on February 7. ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022.

“Search has changed and will change. It’s not going away. Just like when television came into existence, radio didn’t go away, but TV got a lot more excitement. Same will happen here. The new capabilities of AI of chat of answers are now increasingly exciting because they’re helping answer questions that search didn’t do. And with Bing, we are completely unique in that leadership today,” Yusuf Mehdi, corporate vice president and consumer chief marketing officer of Microsoft told PTI.

Microsoft, under its Indian-American CEO Satya Nadella, has a vision of the world moving from search engines to what it thinks of as “your co-pilot” for the web, which does four things: better search, answers to questions, chat, and content creation.

“We’re now having over 100 million daily activities on Bing. We are in 169 countries and India is one of the top three markets for us in this new Bing preview. In fact, India is the top image creator market, based on users using the feature, which is really pretty neat,” Mehdi said.

“So, of all the countries in the world, India’s the top. With some of these visual capabilities, one of the things we also announced this last week is knowledge cards. So that you can now get richer views of the searches. We are seeing a Bollywood actor Kiara Advani as the top search in knowledge cards with other actors rounding out in the Indian market. So, seeing great engagement there (in India),” he said.

Responding to a question, he said the Indian market is very active, as people in the country are using many of the new features that Microsoft has recently launched.

The new Bing has been receiving very positive feedback from its users, he said.

“The feedback is overwhelmingly positive as people prefer it as a new way to search, not just the answers, but the ability to chat and search. That’s an important thing because it marks a difference between us and Google,” he said.

“Google is trying to say that the chat has nothing to do with search and they’re separate products. We think they’re one integrated product. … In chat we got a lot of feedback about people wanting to use it for more than just search,” he said.

People want to do social entertainment and want to be able to talk to the AI chatbot, Mehdi said, adding Microsoft continues to improve the factual accuracy of answers.

“Because while it can be very creative, there are still areas where we can do a better job. Things like math questions, things like searches about individual people, we are still doing more work there,” he said.

Some of the things like knowledge cards and stories are something very unique to Bing, which Google doesn’t do, he said.

“When you do a search, we can now give you a much richer answer of what that looks like. We can give you, for example, five images of the thing you’re looking for. So, if you’re searching, for example, Kiara Advani, we can give you the actor and we can show you various images in the knowledge card, a lot of information,” he said.

“So we are automating particular answers for the Indian market for the top searches, whether that’s actors or movie stars or whether it’s top news in India or top travel sites in India. We’re doing a lot of those special cards for India,” Mehdi said.

Observing that search is still a magical tool, Mehdi said this has evolved and now it is also being used for planning and getting answers to complicated questions.

Bing with the new AI can respond to complicated questions which regular searches cannot do, he said.

“One of the things that we’ve made progress with Bing is we’re now able to answer those questions, many of those questions that Google cannot do because we’re using ChatGPT to help refine… because we’re using AI to help answer the question,” he said.

Google has taken a different approach, so far, he said.

“They have a very separate chat product called Bard that’s different from Google search. They haven’t done any of the AI work in Google search. We’ve brought that right in. So, we have a much better offering now for people. And we think that is the future of bringing search and chat and creation together. That’s why our vision’s so different from their vision,” Mehdi said.

He noted that the latest development would have an impact on the news industry as well.

“A lot of how the news industry has worked with search today is that there’s a very delicate balance of …do great journalism like yourself, then someone searches for the latest news, let’s say in Israel, something happened. And then there might be a snippet of information and then I click on it to go to the story,” he said.

“Now with AI and with chat, you can get even more of a clear answer, but not necessarily the article or the great reporting. That will change a little bit. What we are doing is we’re providing links now to drive more content and more traffic to people.

“I think what’ll happen is we’ll see more traffic go to news agencies and new publishers because of what we’re doing in Bing to help better get the answer. But it will change the advertising model. We think there’ll be fewer ads that will be more relevant and have higher returns,” Mehdi said.



Meta Steps Up Chatbot Buzz, Announces Research Tool LLaMA as Rival to Microsoft’s ChatGPT, Google’s LaMDA

Meta Platforms introduced a research tool for building artificial intelligence-based chatbots and other products, seeking to create a buzz for its own technology in a field lately focused on internet rivals Google and Microsoft.

The tool, LLaMA, is Meta’s latest entry in the realm of large language models, which “have shown a lot of promise in generating text, having conversations, summarizing written material and more complicated tasks like solving math theorems or predicting protein structures,” Chief Executive Officer Mark Zuckerberg said in a Facebook post on Friday.

For now LLaMA isn’t in use in Meta’s products, which include social networks Facebook and Instagram, according to a spokesperson. The company plans to make the technology available to AI researchers.

“Meta is committed to this open model of research,” Zuckerberg wrote.

Large language models are massive AI systems that suck up enormous volumes of digital text — from news articles, social media posts or other internet sources — and use that written material to train software that predicts and generates content on its own when given a prompt or query. The models can be used for tasks like writing essays, composing tweets, generating chatbot conversations and suggesting computer programming code. 

The technology has become popular, and controversial, in recent months as more companies have started to build them and introduce tests of products based on the models, spotlighting a new area of competition among tech giants. Microsoft is investing billions in OpenAI, the maker of GPT-3, the large language model that runs the ChatGPT chatbot. The software maker this month unveiled a test version of its Bing search engine running on OpenAI’s chat technology, which raised immediate concerns over its sometimes-inappropriate responses.

Alphabet’s Google has a model called LaMDA, or Language Model for Dialogue Applications. The internet search and advertising leader is testing a chat-based, AI-powered search product called Bard, which also still has some glitches.

Meta previously launched a large language model called OPT-175B, but LLaMA is a newer and more advanced system. Another model Meta released late last year, Galactica, was quickly pulled back after researchers discovered it was routinely sharing biased or inaccurate information with people who used it.

Zuckerberg has made AI a top priority inside the company, often talking about its importance to improving Meta’s products on earnings conference calls and in interviews. While LLaMA is not being used in Meta products now, it’s possible that it will be in the future. Meta for now relies on AI for all kinds of functions, including content moderation and ranking material that appears in user feeds. 

Making the LLaMA model open-source allows outsiders to see more clearly how the system works, tweak it to their needs and collaborate on related projects. Last year, Big Science and Hugging Face released BLOOM, an open-source LLM that was intended to make this kind of technology more accessible.
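
As a rough sketch of what that kind of open release enables in practice, a researcher can download a small, publicly hosted checkpoint and generate text locally. The example below is generic rather than Meta’s tooling; it assumes the Hugging Face transformers library is installed and uses the small public BLOOM variant “bigscience/bloom-560m” as a stand-in, since LLaMA itself was being made available to AI researchers rather than as a general download:

```python
# Generic sketch using the Hugging Face transformers library and the small,
# publicly hosted BLOOM checkpoint "bigscience/bloom-560m" (chosen here only
# because it is openly downloadable); this is not Meta's tooling.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Large language models can be used to"
inputs = tokenizer(prompt, return_tensors="pt")

# Autoregressive generation: the model repeatedly predicts the next token.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```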

© 2023 Bloomberg LP



Microsoft’s Bing Plans AI Ads, Testing Them in Early Version of Chatbot

Microsoft has started discussing with ad agencies how it plans to make money from its revamped Bing search engine powered by generative artificial intelligence as the tech company seeks to battle Google’s dominance.

In a meeting with a major ad agency this week, Microsoft showed off a demo of the new Bing and said it plans to allow paid links within responses to search results, said an ad executive, who spoke about the private meeting on the condition of anonymity.

Generative AI, which can produce original answers in a human voice in response to open-ended questions or requests, has recently captivated the world. Last week, Microsoft and Alphabet’s Google announced new generative AI chatbots a day apart from each other. Those bots, which have not yet rolled out widely to users, will be able to synthesize material on the web for complex search queries.

Early search results and conversations with Microsoft’s Bing and Google’s chatbot called Bard have shown they can be unpredictable. Alphabet lost $100 billion (nearly Rs. 8,27,500 crore) in market value on the day when it released a promotional video for Bard that showed the chatbot sharing inaccurate information.

Microsoft expects the more human responses from the Bing AI chatbot will generate more users for its search function and therefore more advertisers. Advertisements within the Bing chatbot may also enjoy more prominence on the page compared to traditional search ads.

Microsoft is already testing ads in its early version of the Bing chatbot, which is available to a limited number of users, according to the ad executive and ads seen by Reuters this week.

The company is taking traditional search ads, in which brands pay to have their websites or products appear on search results for keywords related to their business, and inserting them into responses generated by the Bing chatbot, the ad executive said.

Microsoft declined to comment on the specifics of its plans.

Microsoft is also planning another ad format within the chatbot that will be geared toward advertisers in specific industries. For example, when a user asks the new AI-powered Bing “what are the best hotels in Mexico?”, hotel ads could pop up, according to the ad executive.

Integrating ads into the Bing chatbot, which can be expanded to fill the top of the search page, could help ensure that ads are not pushed further down the page below the chatbot.

Omnicom, a major ad group that works with brands like AT&T and Unilever, has told clients that search ads could generate lower revenue in the short term if the chatbots take up the top of search pages without including any ads, according to a note to clients last week, which was reviewed by Reuters.

The new Bing, which has a waitlist of millions of people for access, is a potentially lucrative opportunity for Microsoft. The company said during an investor and press presentation last week that every percentage point of market share it gains in the search advertising market could bring in another $2 billion (nearly Rs. 16,550 crore) of ad revenue.

Microsoft’s Edge web browser, which uses the Bing search engine, has a market share under 5 percent worldwide, according to one estimate from web analytics firm StatCounter.

Michael Cohen, executive vice president of performance media at media agency Horizon Media, who received a demo of Bing during a separate meeting with Microsoft representatives, said the company indicated that links at the bottom of Bing’s AI-generated search responses could be places for ads.

“They seem intent on starting off immediately with paid ads integrated,” Cohen said, adding that Microsoft said more information about the strategy could come in early March.

This week, when a Reuters reporter asked the new version of Bing outfitted with AI for the price of car air filters, Bing included advertisements for filters sold by auto parts website Parts Geek.

Parts Geek did not immediately respond to questions about whether it was aware of its ads appearing in the new Bing chatbot.

Microsoft, when asked about the Parts Geek ads, said the potential of the new AI technology in advertising is only beginning to be explored and it aims to work with its partners and the ad industry.

Despite the early tests, Microsoft has not provided a timeline for when brands will be able to directly purchase ads within the chatbot, Cohen and the ad executive said.

In the long term, conversational AI is likely to become the dominant way consumers search on the internet, Omnicom said in its letter to clients.

“It is not an exaggeration to say that (Microsoft and Google’s) announcements signal the biggest change to search in 20 years,” Omnicom said.

© Thomson Reuters 2023

