ChatGPT Performs Worse Than Students at Accounting Exams, Struggles With Mathematical Process

Researchers found that students fared better on accounting exams than ChatGPT, OpenAI’s chatbot product.

Despite this, they said that ChatGPT’s performance was “impressive” and that it was a “game changer that will change the way everyone teaches and learns – for the better.” The researchers, from Brigham Young University (BYU) in the US and 186 other universities, wanted to know how OpenAI’s technology would fare on accounting exams. They have published their findings in the journal Issues in Accounting Education.

In the researchers’ accounting exam, students scored an overall average of 76.7 percent, compared to ChatGPT’s score of 47.4 percent.

ChatGPT scored higher than the student average on 11.3 percent of questions, doing particularly well on accounting information systems (AIS) and auditing, but it performed worse on tax, financial, and managerial assessments. The researchers think this may be because ChatGPT struggled with the mathematical processes those question types require.

The AI bot, which uses machine learning to generate natural language text, did better on true/false questions (68.7 percent correct) and multiple-choice questions (59.5 percent), but struggled with short-answer questions (scoring between 28.7 and 39.1 percent).

In general, the researchers said that higher-order questions were harder for ChatGPT to answer. In fact, ChatGPT sometimes provided authoritative-sounding written descriptions of incorrect answers, or answered the same question in different ways.

They also found that ChatGPT often provided explanations for its answers even when they were incorrect. At other times, it selected the wrong multiple-choice answer despite providing an accurate description.

Importantly, the researchers noted that ChatGPT sometimes made up facts. For example, when asked to provide a reference, it generated a real-looking citation that was completely fabricated; the work, and sometimes the authors, did not exist.

The bot also made nonsensical mathematical errors, such as adding two numbers in a subtraction problem or dividing numbers incorrectly.

Wanting to add to the intense ongoing debate about how models like ChatGPT should factor into education, lead study author David Wood, a BYU professor of accounting, decided to recruit as many professors as possible to see how the AI fared against actual university accounting students.

His co-author recruiting pitch on social media exploded: 327 co-authors from 186 educational institutions in 14 countries participated in the research, contributing 25,181 classroom accounting exam questions.

They also recruited undergraduate BYU students to feed another 2,268 textbook test bank questions to ChatGPT. The questions covered AIS, auditing, financial accounting, managerial accounting and tax, and varied in difficulty and type (true/false, multiple choice, short answer).



Google Bard Now Helps Write Software Code in 20 Programming Languages

Alphabet’s Google said on Friday it will update Bard, its generative artificial intelligence (AI) chatbot, to help people write code to develop software, as the tech giant plays catch-up in a fast-moving race on AI technology.

Last month, the company started the public release of Bard to gain ground on Microsoft.

The release of ChatGPT, a chatbot from the Microsoft-backed startup OpenAI, last year caused a sprint in the technology sector to put AI into more users’ hands.

Google describes Bard as an experiment allowing collaboration with generative AI, technology that relies on past data to create rather than identify content.

Bard will be able to code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users, Google said on Friday.

The company said Bard can also optimise code to make it faster or more efficient with simple prompts such as “Could you make that code faster?”
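
The report does not include sample output, but a hypothetical before-and-after sketch illustrates the kind of rewrite such a prompt might produce; the function and the specific optimisation below are invented for this example, not taken from Bard.

```python
# Hypothetical illustration (not actual Bard output) of a typical
# "make this code faster" rewrite. The function is invented for this example.

def contains_duplicate_slow(items):
    # Original version: compares every pair of elements, O(n^2) time.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def contains_duplicate_fast(items):
    # "Faster" version: a set remembers previously seen elements,
    # bringing the average running time down to O(n).
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(contains_duplicate_slow([3, 1, 4, 1, 5]))  # True
print(contains_duplicate_fast([3, 1, 4, 1, 5]))  # True
```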

Currently, Bard can be accessed by a small set of users who can chat with the bot and ask questions instead of running Google’s traditional search tool.

The company began the public release of its chatbot Bard in late March this year, seeking users and feedback to gain ground on Microsoft in a fast-moving race on artificial intelligence technology. Bard could show three different versions or “drafts” of any given answer among which users could toggle, and it displayed a button stating “Google it,” should a user desire web results for a query.

© Thomson Reuters 2023



SAP Plans to Use OpenAI’s Chatbot ChatGPT Amid Growth in Quarterly Revenue

Business software maker SAP on Friday reported first-quarter revenue above analysts’ expectations, backed by growth in its cloud business, but lowered its outlook for the year due to the divestment of its Qualtrics unit.

SAP, which in January announced plans to cut 3,000 jobs to rein in costs, foresees no more restructuring this year and plans to use artificial intelligence technologies like generative AI in its products.

While tougher economic conditions have riled big technology companies, SAP has still been able to grow its revenue by 10 percent in the first quarter to EUR 7.44 billion (roughly Rs. 60,700 crore), beating a company-provided consensus.

It said it was working with Microsoft-backed OpenAI’s chatbot ChatGPT, which can provide human-like responses to questions.

“We were studying ChatGPT for quite a while… we have built over 50 AI use cases, embedding them with our technology,” CEO Christian Klein said in an interview. Those use cases will be available to customers next month after the company’s annual Sapphire conference, he said.

SAP also has an internal committee with customers, researchers and analysts to check for biases in AI use cases and guard against potential misuse of the technology, Klein said.

Revenue from SAP’s lucrative cloud business grew 24 percent year-on-year, broadly in line with consensus. SAP has already excluded the profits of its Qualtrics subsidiary, which it divested last month, from the current earnings report.

For the year, SAP expects non-IFRS operating profit in the range of EUR 8.6 billion to EUR 8.9 billion (roughly Rs. 70,500 crore to Rs. 73,000 crore), EUR 200 million (roughly Rs. 1,600 crore) less than before. The cloud revenue forecast was cut by EUR 1.3 billion (roughly Rs. 10,700 crore) to between EUR 14 billion and EUR 14.4 billion (roughly Rs. 1,14,900 crore to Rs. 1,18,100 crore).

“Underlying guidance is essentially unchanged, although updated to reflect the disposal of Qualtrics,” Jefferies analysts wrote in a client note.

© Thomson Reuters 2023



Alphabet to Consolidate Google Brain, DeepMind AI Research Units in Race to Keep Up With Rival ChatGPT

Alphabet is combining Google Brain and DeepMind, as it doubles down on artificial intelligence research in its race to compete with rival systems like OpenAI’s ChatGPT chatbot.

The new division will be led by DeepMind CEO Demis Hassabis, and its creation will ensure the “bold and responsible development of general AI”, Alphabet CEO Sundar Pichai said in a blog post on Thursday.

Alphabet said the teams that are being combined have delivered a number of high-profile projects including the transformer, technology that formed the bedrock of some of OpenAI’s own work.

Going forward, the Alphabet staff will work on “multimodal” AI, like OpenAI’s latest model GPT-4, which can respond not only to text prompts but also to image inputs to generate new content.

Google has for decades dominated the search market, with a share of over 80 percent, but Wall Street fears the Alphabet unit could fall behind Microsoft Corp in the fast-moving AI race. Technology from OpenAI, funded by Microsoft, powers the rival software maker’s updated Bing search engine.

Alphabet announced the launch of Bard in February to take on ChatGPT as well. It lost $100 billion in market value on Feb. 8 after Bard shared inaccurate information in a promotional video and a company event failed to dazzle.

Alphabet shares were up 2 percent on Thursday. Earlier this week, it was reported that Alphabet shares fell over 4 percent in premarket trading after a report that said South Korea’s Samsung Electronics was considering replacing Google with Microsoft-owned Bing as the default search engine on its devices.

The report, published by the New York Times over the weekend, underscored the growing challenges Google’s $162-billion-a-year (roughly Rs. 13,29,477 crore) search engine business faces from Bing — a minor player that has risen in prominence recently after the integration of the artificial intelligence tech behind ChatGPT.

Google’s reaction to the threat was “panic” as the company earns an estimated $3 billion (roughly Rs. 24,625 crore) in annual revenue from the Samsung contract, the report said, citing internal messages.

© Thomson Reuters 2023



Google’s Rush to Take Its AI Chatbot Bard Public Led to Ethical Lapses, Employees Say

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard for suggestions on how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. But the November 2022 debut of rival OpenAI’s popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.

That was a markedly faster pace of development for the technology, and one that could have profound societal impact. The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said. The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development, they said.

Google is aiming to revitalize its maturing search business around the cutting-edge technology, which could put generative AI into millions of phones and homes around the world — ideally before OpenAI, with the backing of Microsoft, beats the company to it.

“AI ethics has taken a back seat,” said Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google manager. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

In response to questions from Bloomberg, Google said responsible AI remains a top priority at the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Brian Gabriel, a spokesperson. The team working on responsible AI shed at least three members in a January round of layoffs at the company, including the head of governance and programs. The cuts affected about 12,000 workers at Google and its parent company.

Google, which over the years spearheaded much of the research underpinning today’s AI advancements, had not yet integrated a consumer-friendly version of generative AI into its products by the time ChatGPT launched. The company was cautious about its power and the ethical considerations that would go hand in hand with embedding the technology into search and other marquee products, the employees said.

By December, senior leadership decreed a competitive “code red” and changed its appetite for risk. Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said. Still, it needed to get its ethics teams on board. That month, the AI governance lead, Jen Gennai, convened a meeting of the responsible innovation group, which is charged with upholding the company’s AI principles.

Gennai suggested that some compromises might be necessary in order to pick up the pace of product releases. The company assigns scores to its products in several important categories, meant to measure their readiness for release to the public. In some, like child safety, engineers still need to clear the 100 percent threshold. But Google may not have time to wait for perfection in other areas, she advised in the meeting. “‘Fairness’ may not be, we have to get to 99 percent,” Gennai said, referring to its term for reducing bias in products. “On ‘fairness,’ we might be at 80, 85 percent, or something” to be enough for a product launch, she added.

In February, one employee raised issues in an internal message group: “Bard is worse than useless: please do not launch.” The note was viewed by nearly 7,000 people, many of whom agreed that the AI tool’s answers were contradictory or even egregiously wrong on simple factual queries.

The next month, Gennai overruled a risk evaluation submitted by members of her team stating Bard was not ready because it could cause harm, according to people familiar with the matter. Shortly after, Bard was opened up to the public — with the company calling it an “experiment”.

In a statement, Gennai said it wasn’t solely her decision. After the team’s evaluation she said she “added to the list of potential risks from the reviewers and escalated the resulting analysis” to a group of senior leaders in product, research and business. That group then “determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” she said.

Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety. Researchers building AI outnumber those focused on safety by a 30-to-1 ratio, the Center for Humane Technology said at a recent presentation, underscoring the often lonely experience of voicing concerns in a large organization.

As progress in artificial intelligence accelerates, new concerns about its societal effects have emerged. Large language models, the technologies that underpin ChatGPT and Bard, ingest enormous volumes of digital text from news articles, social media posts and other internet sources, and then use that written material to train software that predicts and generates content on its own when given a prompt or query. That means that by their very nature, the products risk regurgitating offensive, harmful or inaccurate speech.
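
As a toy illustration of that “predict and generate” mechanism, the sketch below builds a minimal next-word predictor from a tiny text sample. Real large language models like those behind ChatGPT and Bard use neural networks trained on vastly more data, so this only conveys the basic idea, not how those systems actually work.

```python
# Toy illustration of the "predict the next word" idea behind large
# language models. This simple bigram counter is only a sketch of the
# principle; production LLMs are neural networks at enormous scale.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(prompt_word, length=6):
    # Repeatedly sample a likely next word, much as an LLM samples tokens.
    words = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

Because the model only ever echoes patterns in its training text, it can just as readily reproduce whatever offensive or inaccurate material that text contains, which is the risk the paragraph above describes.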

But ChatGPT’s remarkable debut meant that by early this year, there was no turning back. In February, Google began a blitz of generative AI product announcements, touting chatbot Bard, and then the company’s video service YouTube, which said creators would soon be able to virtually swap outfits in videos or create “fantastical film settings” using generative AI. Two weeks later, Google announced new AI features for Google Cloud, showing how users of Docs and Slides will be able to, for instance, create presentations and sales-training documents, or draft emails. On the same day, the company announced that it would be weaving generative AI into its health-care offerings. Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.

The challenge of developing cutting-edge artificial intelligence in an ethical manner has long spurred internal debate. The company has faced high-profile blunders over the past few years, including an embarrassing incident in 2015 when its Photos service mistakenly labeled images of a Black software developer and his friend as “gorillas.”

Three years later, the company said it did not fix the underlying AI technology, but instead erased all results for the search terms “gorilla,” “chimp,” and “monkey,” a solution that it says “a diverse group of experts” weighed in on. The company also built up an ethical AI unit tasked with carrying out proactive work to make AI fairer for its users.

But a significant turning point, according to more than a dozen current and former employees, was the ousting of AI researchers Timnit Gebru and Margaret Mitchell, who co-led Google’s ethical AI team until they were pushed out in December 2020 and February 2021 over a dispute regarding fairness in the company’s AI research. Samy Bengio, a computer scientist who oversaw Gebru and Mitchell’s work, and several other researchers would end up leaving for competitors in the intervening years.

After the scandal, Google tried to improve its public reputation. The responsible AI team was reorganized under Marian Croak, then a vice president of engineering. She pledged to double the size of the AI ethics team and strengthen the group’s ties with the rest of the company.

Even after the public pronouncements, some found it difficult to work on ethical AI at Google. One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review. Managers protested that it was getting in the way of their “real work,” the person said.

Those who remained working on ethical AI at Google were left questioning how to do the work without putting their own jobs at risk. “It was a scary time,” said Nyalleng Moorosi, a former researcher at the company who is now a senior researcher at the Distributed AI Research Institute, founded by Gebru. Doing ethical AI work means “you were literally hired to say, I don’t think this is population-ready,” she added. “And so you are slowing down the process.”

To this day, AI ethics reviews of products and features, two employees said, are almost entirely voluntary at the company, with the exception of research papers and the review process conducted by Google Cloud on customer deals and products for release. AI research in delicate areas like biometrics, identity features, or children is given a mandatory “sensitive topics” review by Gennai’s team, but other projects do not necessarily receive ethics reviews, though some employees reach out to the ethical AI team even when not required.

Still, when employees on Google’s product and engineering teams look for a reason the company has been slow to market on AI, the public commitment to ethics tends to come up. Some in the company believed new tech should be in the hands of the public as soon as possible, in order to make it better faster with feedback.

Before the code red, it could be hard for Google engineers to get their hands on the company’s most advanced AI models at all, another former employee said. Engineers would often start brainstorming by playing around with other companies’ generative AI models to explore the possibilities of the technology before figuring out a way to make it happen within the bureaucracy, the former employee said.

“I definitely see some positive changes coming out of ‘code red’ and OpenAI pushing Google’s buttons,” said Gaurav Nemade, a former Google product manager who worked on its chatbot efforts until 2020. “Can they actually be the leaders and challenge OpenAI at their own game?” Recent developments — like Samsung reportedly considering replacing Google with Microsoft’s Bing, whose tech is powered by ChatGPT, as the search engine on its devices — have underscored the first-mover advantage in the market right now.

Some at the company said they believe that Google has conducted sufficient safety checks with its new generative AI products, and that Bard is safer than competing chatbots. But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.

Teams working on the new AI features have been siloed, making it hard for rank-and-file Googlers to see the full picture of what the company is working on. Company mailing lists and internal channels that were once places where employees could openly voice their doubts have been curtailed with community guidelines under the pretext of reducing toxicity; several employees said they viewed the restrictions as a way of policing speech.

“There is a great amount of frustration, a great amount of this sense of like, what are we even doing?” Mitchell said. “Even if there aren’t firm directives at Google to stop doing ethical work, the atmosphere is one where people who are doing the kind of work feel really unsupported and ultimately will probably do less good work because of it.”

When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings — a stance that has been critiqued by some in the field as a form of marketing — rather than the day-to-day scenarios that already have the potential to be harmful.

El-Mahdi El-Mhamdi, a former research scientist at Google, said he left the company in February over its refusal to engage with ethical AI issues head-on. Late last year, he said, he co-authored a paper that showed it was mathematically impossible for foundational AI models to be large, robust and remain privacy-preserving.

He said the company raised questions about his participation in the research while using his corporate affiliation. Rather than go through the process of defending his work, he said he volunteered to drop the affiliation with Google and use his academic credentials instead.

“If you want to stay on at Google, you have to serve the system and not contradict it,” El-Mhamdi said.

© 2023 Bloomberg LP



Amazon Releases New Cloud Tools to Help Build Chatbots as AI Competition With Microsoft, Google Heats Up

Amazon.com’s cloud computing division on Thursday released a suite of technologies aimed at helping other companies develop their own chatbots and image-generation services backed by artificial intelligence. 

Microsoft and Alphabet are adding AI chatbots to consumer products like their search engines, but they are also eyeing another huge market: selling the underlying technology to other companies via their cloud operations. 

Amazon Web Services (AWS), the world’s biggest cloud computing provider, on Thursday jumped into that race with a suite of its own proprietary AI technologies, but it is taking a different approach. 

AWS will offer a service called Bedrock that lets businesses customize what are called foundation models – the core AI technologies that do things like respond to queries with human-like text or generate images from a prompt – with their own data to create a unique model. ChatGPT creator OpenAI, for example, offers a similar service, letting customers fine-tune the models behind ChatGPT to create a custom chatbot. 

The Bedrock service will let customers work with Amazon’s own proprietary foundation models called Amazon Titan, but it will also offer a menu of models offered by other companies. The first third-party options will come from startups AI21 Labs, Anthropic and Stability AI alongside Amazon’s own models. 

The Bedrock service lets AWS customers test-drive those technologies without having to deal with the underlying data center servers that power them. 

“It’s unneeded complexity from the perspective of the user,” Vasi Philomin, vice president of generative AI at AWS, told Reuters. “Behind the scenes, we can abstract that away.” 
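
Bedrock had only just been announced, so the sketch below is speculative: it shows what invoking a hosted foundation model through the AWS SDK for Python (boto3) might look like. The client name, model identifier, and request and response shapes here are assumptions rather than details from the article.

```python
# Speculative sketch only: none of the identifiers below come from the
# article. The client name, model ID, and request/response shapes are
# assumptions about what such an SDK call might look like.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed Titan model identifier
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Draft a short product announcement."}),
)

# The response body is a stream; decode it to inspect the generated text.
# The exact JSON layout would depend on the chosen foundation model.
result = json.loads(response["body"].read())
print(result)
```

The point of such an interface is exactly the abstraction Philomin describes: the customer supplies a prompt and a model name, and the provisioning of the servers and accelerator chips behind the call stays invisible.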

Those underlying servers will use a mix of Amazon’s own custom AI chips and chips from Nvidia Corp, the biggest supplier of chips for AI work, whose products have been in tight supply this year. 

“We’re able to land tens of thousands, hundreds of thousands of these chips, as we need them,” Dave Brown, vice president of Elastic Compute Cloud at AWS, said of the company’s custom chips. “It is a release valve for some of the supply-chain concerns that I think folks are worried about.”

© Thomson Reuters 2023



ChatGPT Can Resume in Italy if OpenAI Meets Data Watchdog’s Demands by April 30

Italy’s data protection agency set out a list of demands on Wednesday which it said OpenAI must meet by April 30 to address the agency’s concerns over the ChatGPT chatbot and allow the artificial intelligence service to resume in the country.

Almost two weeks ago Microsoft-backed OpenAI took ChatGPT offline in Italy after the authority, known as Garante, temporarily restricted its personal data processing and began a probe into a suspected breach of privacy rules.

In a statement on Wednesday, Garante laid out a set of “concrete” demands to be met by the end of this month.

“Only in this case… the authority will suspend the provisional restrictions on the use of the data of Italian users … and ChatGPT will once again become accessible in Italy,” it said.

OpenAI on Thursday welcomed the agency’s move.

“We are happy that the Italian Garante is reconsidering their decision and we look forward to working with them to make ChatGPT available to our customers in Italy again soon,” a spokesperson told Reuters.

Italy was the first Western European country to curb ChatGPT, but the chatbot’s rapid development has attracted the attention of lawmakers and regulators in several countries.

Many experts say new regulations are needed to govern artificial intelligence (AI) because of its potential impact on national security, jobs and education.

The authority said OpenAI is required to inform users in Italy of “the methods and logic” behind the processing of data necessary for ChatGPT to operate.

The watchdog also asked OpenAI to provide tools to enable people whose data is involved, including non-users, to request the correction of personal data inaccurately generated by the service or its deletion, if a correction is not possible.

OpenAI should also allow non-users to oppose “in a simple and accessible manner” the processing of their personal data to run its algorithms, Garante said.

It also asked the company to introduce by the end of September an age verification system capable of excluding access to users under 13.

Garante said it would continue investigating potential breaches of data protection rules by OpenAI, reserving the right to impose any other measures needed at the end of its ongoing probe.

The Italian move on ChatGPT has piqued the interest of other privacy watchdogs in Europe, which are studying whether harsher measures are needed for chatbots and whether to coordinate their actions.

Spain’s data protection agency has asked the European Union’s privacy watchdog to evaluate privacy concerns surrounding ChatGPT.

In February, the Italian regulator banned AI chatbot company Replika from using the personal data of users in Italy, citing risks to minors and emotionally fragile people.

© Thomson Reuters 2023



China’s Payment Association Warns Over Risks of Data Leaks While Using ChatGPT-Like AI Tools

China’s payment & clearing industry association warned on Monday against using Microsoft-backed OpenAI’s ChatGPT and similar artificial intelligence tools due to “risks such as cross-border data leaks.”

“Payment industry staff must comply with laws and rules when using tools such as ChatGPT, and should not upload confidential information related to the country and the finance industry,” the Payment & Clearing Association of China said in a statement on Monday. The association is governed by China’s central bank.

OpenAI has kept its artificial intelligence-powered chatbot off-limits to users in China, but the app is attracting huge interest there, with firms rushing to integrate the technology into their products and launch rival solutions.

While residents in China are unable to create OpenAI accounts, virtual private networks and foreign phone numbers are helping some bypass those restrictions to access the chatbot.

Italy has temporarily banned ChatGPT and launched a probe over suspected breaches of privacy rules. Some European countries were studying whether stronger measures were needed.

Excitement in China over the chatbot has helped fuel a rally in tech, media and telecom (TMT) shares, with analysts cautioning about bubble risks.

Economic Daily, a Chinese state media outlet, published a commentary on Monday urging regulators to step up supervision and crack down on speculation in the sector.

Chinese shares in computer, media and communications equipment tumbled between 3.4 percent and 5.6 percent on Monday.

© Thomson Reuters 2023



Baidu Sues Apple, Other App Developers Over Fake Copies of Ernie Bot App

Chinese search engine giant Baidu has filed lawsuits against “relevant” app developers and Apple over fake copies of its Ernie bot app available on Apple’s App Store.

The company’s artificial intelligence powered Ernie bot, launched last month, has been touted as China’s closest answer to the US-developed chatbot ChatGPT.

Baidu said it had lodged lawsuits in the Beijing Haidian People’s Court against the developers behind the counterfeit Ernie bot applications, and against Apple.

“At present, Ernie does not have any official app,” Baidu said in a statement late on Friday posted on its official “Baidu AI” WeChat account.

It also posted a photograph of its court filing.

“Until our company’s official announcement, any Ernie app you see from App Store or other stores are fake,” it said.

Apple did not immediately respond to a request for comment.

A Reuters search on Saturday found there were still at least four apps bearing the Chinese-language name of the Ernie bot, all fake, in Apple’s App Store.

The Ernie bot is only available to users who apply for and receive access codes. In its statement, Baidu also warned against people selling access codes.

Baidu unveiled its artificial intelligence-powered chatbot, known as Ernie Bot, in March this year. Ernie is short for “Enhanced Representation through Knowledge Integration”. The popularity of ChatGPT, backed by Microsoft, has triggered a frenzied rush among Chinese tech giants and startups alike to develop a rival. 

Ernie Bot was initially open only to a group of users with invitation codes; companies can apply to embed the bot into their products via Baidu’s cloud platform.

© Thomson Reuters 2023



Alibaba Invites Businesses to Test Its ChatGPT-Like AI Chatbot: Report

Tech giant Alibaba is seeking companies to test its Tongyi Qianwen AI chatbot, business publication STAR Market Daily reported on Friday, joining the rush to emulate the explosive success of ChatGPT.

The free-to-use ChatGPT, a large language model (LLM) application created by Microsoft-backed OpenAI, was released to the public last November and can generate articles and essays on demand in response to user prompts.

Alibaba has opened up registration for businesses to conduct testing of its AI application, STAR Market Daily reported, without specifying further details.

A source close to the matter confirmed to Reuters that the application was an LLM targeted at business users.

Alibaba’s cloud computing division published a teaser on Friday, posting a message on social media simply stating: “Hello, my name is Tongyi Qianwen, this is our first time meeting, I welcome your feedback.”

The official website for the chatbot application merely has boxes to enter phone numbers and email addresses to request an invitation but provides no specific details about its exact use.

Alibaba Cloud did not respond immediately to an emailed request for comment.

A formal launch is expected at an Alibaba Cloud event on Tuesday. Daniel Zhang, CEO of Alibaba Group as well as the company’s cloud division, is scheduled to speak at the event.

Others to have joined the AI chatbot race include Baidu, with its Ernie Bot application open only to trial users at the moment.

On Saturday, network gear maker Huawei Technologies is due to stage an event unveiling Pangu, its natural language processing (NLP) AI model.

SenseTime is also holding an event next week to showcase “cutting-edge advancements in artificial intelligence software”.

Last week Alibaba announced that it will restructure into six standalone divisions, each with its own board and CEO. Zhang is set to stay as the cloud division’s CEO.

© Thomson Reuters 2023
 


