QX Lab AI Launches Ask QX, a Node-Based Hybrid Generative AI Platform

Ask QX, a node-based hybrid generative artificial intelligence (AI) platform trained on both a large language model (LLM) and a neural network architecture, has been launched by the Dubai-headquartered QX Lab AI. The platform is available in more than 100 global languages, 12 of which are Indian languages. The firm claims that the platform had eight million registered users at launch. Ask QX will be available in both free and paid versions.

QX Lab AI claimed that the platform is the world’s first hybrid AI system, although we could not substantiate the claim. The hybrid nature comes from the chatbot being trained on both an LLM and neural networks. Neural network architectures, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning. They pass data through layers of nodes, each of which fires only when its input crosses a specific threshold. In essence, this improves both the accuracy of the output and the speed at which it is generated.
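The node-and-threshold idea described above can be sketched in a few lines. This is a generic illustration of how an artificial neuron and a layer of such nodes work; the weights, inputs and threshold value are arbitrary examples, not anything from Ask QX's actual system.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of the inputs
    crosses the threshold; otherwise stay silent (return 0)."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def layer(inputs, weight_rows, threshold):
    """A layer is simply several such nodes reading the same inputs."""
    return [neuron(inputs, row, threshold) for row in weight_rows]

print(neuron([1.0, 0.5], [0.6, 0.4], threshold=0.7))  # 1: 0.8 >= 0.7
print(neuron([1.0, 0.5], [0.2, 0.4], threshold=0.7))  # 0: 0.4 < 0.7
```

Stacking many such layers, and learning the weights from data rather than fixing them by hand, is what turns this toy into a real neural network.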

As per the company, 70 percent of the AI platform is trained on ANN and 30 percent on LLM. This lets Ask QX improve both natural language processing (NLP), which helps in generating text, and accuracy, which has been a consistent issue with chatbots trained only on LLMs. The AI platform has 372 billion parameters and was trained on around 6 trillion tokens, as per the company.

At launch, Ask QX supports text and audio formats, making it a multimodal model, and the company says image and video generation capabilities will be added by March 2024. Notably, no AI chatbot so far offers all of these features together. QX Lab AI claims that the hybrid model reduces overall computational costs and strengthens platform security, resulting in an energy-efficient system that also protects against potential data breaches.

QX Lab AI has also revealed that the AI platform will be launched with the support of over 100 languages out of which 12 are Indian languages. These are Hindi, Bengali, Telugu, Marathi, Tamil, Urdu, Gujarati, Kannada, Malayalam, Odia, Punjabi, and Assamese.

Ask QX will be available in two versions. The free version gives users access to the Ask QX gen AI neural engine, whereas the paid version, aimed at enterprise clients, will be based on a more powerful neural network. The company says the enterprise version targets a wide range of sectors, including healthcare, education, and legal services.

Ask QX is available in India and can be accessed both as a web version and as an Android app on the Play Store. The company said an iOS app will follow soon.


Affiliate links may be automatically generated – see our ethics statement for details.


Meta to Launch AI-Powered Chatbots With Different Personalities by September: Report

Meta Platforms is preparing to launch a range of artificial intelligence (AI) powered chatbots that exhibit different personalities as soon as September, the Financial Times reported on Tuesday.

Meta has been designing prototypes for chatbots that can have humanlike discussions with its users, as the company attempts to boost its engagement with its social media platforms, according to the report, citing people with knowledge of the plans.

The Menlo Park, California-based social media giant is even exploring a chatbot that speaks like Abraham Lincoln and another that advises on travel options in the style of a surfer, the report added. The purpose of these chatbots will be to provide a new search function as well as offer recommendations.

The report comes as Meta executives are focusing on boosting retention on its new text-based app Threads, after the app lost more than half of its users in the weeks following its launch on July 5.

Meta did not immediately respond to a Reuters request for comment.

The Facebook parent reported a strong rise in advertising revenue in its earnings last week, forecasting third-quarter revenue above market expectations.

The company has been climbing back from a bruising 2022, buoyed by hype around emerging AI technology and an austerity drive in which it has shed around 21,000 employees since last fall.

Bloomberg News reported in July that Apple is working on AI offerings similar to OpenAI’s ChatGPT and Google’s Bard, adding that it has built its own framework, known as ‘Ajax’, to create large language models and is also testing a chatbot that some engineers call ‘Apple GPT’.

© Thomson Reuters 2023


Samsung launched the Galaxy Z Fold 5 and Galaxy Z Flip 5 alongside the Galaxy Tab S9 series and Galaxy Watch 6 series at its first Galaxy Unpacked event in South Korea. We discuss the company’s new devices and more on the latest episode of Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.

Google Said to Have Warned Employees Against Using Confidential Information on AI Chatbot

Alphabet is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.

The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information.

The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk.

Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said. 

Asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology.

The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers OpenAI and Microsoft  are billions of dollars of investment and still untold advertising and cloud revenue from new AI programs.

Google’s caution also reflects what’s becoming a security standard for corporations, namely to warn personnel about using publicly-available chat programs.

A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not return requests for comment, reportedly has as well. 

Some 43 percent of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents including from top US-based companies, done by the networking site Fishbowl.

By February, Google told staff testing Bard before its launch not to give it internal information, Insider reported. Now Google is rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.

Google told Reuters it has had detailed conversations with Ireland’s Data Protection Commission and is addressing regulators’ questions, after a Politico report Tuesday that the company was postponing Bard’s EU launch this week pending more information about the chatbot’s impact on privacy. 

Worries about sensitive information

Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. Included in this content, however, can be misinformation, sensitive data or even copyrighted passages from a Harry Potter novel.

A Google privacy notice updated on June 1 also states: “Don’t include confidential or sensitive information in your Bard conversations.”

Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally.
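The tag-and-restrict approach described above can be sketched as a simple outbound filter: scan text for sensitive patterns, tag what is found, and block tagged prompts from leaving the network. The patterns and function names below are illustrative assumptions, not Cloudflare's actual product or API.

```python
import re

# Hypothetical sensitive-data patterns; real deployments would use
# far richer detectors (classifiers, fingerprints, exact-match lists).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def tag_sensitive(text):
    """Return the tags of every sensitive pattern found in the text."""
    return [tag for tag, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def block_outbound(prompt):
    """Refuse to forward a chatbot prompt that carries tagged data."""
    tags = tag_sensitive(prompt)
    if tags:
        return f"BLOCKED ({', '.join(tags)})"
    return "ALLOWED"

print(block_outbound("Summarise this meeting"))              # ALLOWED
print(block_outbound("My key is sk-abcdefghijklmnop1234"))   # BLOCKED (api_key)
```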

Google and Microsoft also are offering conversational tools to business customers that will come with a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ conversation history, which users can opt to delete. 

It “makes sense” that companies would not want their staff to use public chatbots for work, said Yusuf Mehdi, Microsoft’s consumer chief marketing officer.

“Companies are taking a duly conservative standpoint,” said Mehdi, explaining how Microsoft’s free Bing chatbot compares with its enterprise software. “There, our policies are much more strict.”

Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters he personally restricted his use.

Matthew Prince, CEO of Cloudflare, said that typing confidential matters into chatbots was like “turning a bunch of PhD students loose in all of your private records.”

© Thomson Reuters 2023


Apple’s annual developer conference is just around the corner. From the company’s first mixed reality headset to new software updates, we discuss all the things we’re looking forward to seeing at WWDC 2023 on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.

EU Lawmakers Struggle to Finalise Law to Regulate ChatGPT and Generative AI

As recently as February, generative AI did not feature prominently in EU lawmakers’ plans for regulating artificial intelligence (AI) technologies such as ChatGPT.

The bloc’s 108-page proposal for the AI Act, published two years earlier, included only one mention of the word “chatbot.” References to AI-generated content largely referred to deepfakes: images or audio designed to impersonate human beings.

By mid-April, however, members of European Parliament (MEPs) were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago.

That scramble culminated on Thursday with a new draft of the legislation which identified copyright protection as a core piece of the effort to keep AI in check.

Interviews with four lawmakers and two other sources close to discussions reveal for the first time how over just 11 days this small group of politicians hammered out what could become landmark legislation, reshaping the regulatory landscape for OpenAI and its competitors.

The draft bill is not final and lawyers say it will likely take years to come into force.

The speed of their work, though, is also a rare example of consensus in Brussels, which is often criticised for the slow pace of decision-making.

Last-minute changes

Since launching in November, ChatGPT has become the fastest growing app in history, and sparked a flurry of activity from Big Tech competitors and investment in generative AI startups like Anthropic and Midjourney.

The runaway popularity of such applications led EU industry chief Thierry Breton and others to call for regulation of ChatGPT-like services.

An organisation backed by Elon Musk, the billionaire CEO of Tesla and Twitter, took it up a notch by issuing a letter warning of existential risk from AI and calling for stricter regulations.

On April 17, the dozen MEPs involved in drafting the legislation signed an open letter agreeing with some parts of Musk’s letter and urging world leaders to hold a summit to find ways to control the development of advanced AI.

That same day, however, two of them — Dragos Tudorache and Brando Benifei — proposed changes that would force companies with generative AI systems to disclose any copyrighted material used to train their models, according to four sources present at the meetings, who requested anonymity due to the sensitivity of the discussions.

That tough new proposal received cross-party support, the sources said.

One proposal by conservative MEP Axel Voss — forcing companies to request permission from rights holders before using the data — was rejected as too restrictive and something that could hobble the emerging industry.  

After thrashing out the details over the next week, the EU outlined proposed laws that could force an uncomfortable level of transparency on a notoriously secretive industry.

“I must admit that I was positively surprised on how we converged rather easily on what should be in the text on these models,” Tudorache told Reuters on Friday.

“It shows there is a strong consensus, and a shared understanding on how to regulate at this point in time.”

The committee will vote on the deal on May 11 and if successful, it will advance to the next stage of negotiation, the trilogue, where EU member states will debate the contents with the European Commission and Parliament.

“We are waiting to see if the deal holds until then,” one source familiar with the matter said.

Big Brother vs the Terminator

Until recently, MEPs were still unconvinced that generative AI deserved any special consideration.

In February, Tudorache told Reuters that generative AI was “not going to be covered” in-depth. “That’s another discussion I don’t think we are going to deal with in this text,” he said.

Citing data security risks over warnings of human-like intelligence, he said: “I am more afraid of Big Brother than I am of the Terminator.”

But Tudorache and his colleagues now agree on the need for laws specifically targeting the use of generative AI.

Under new proposals targeting “foundation models,” companies like OpenAI, which is backed by Microsoft, would have to disclose any copyrighted material — books, photographs, videos and more — used to train their systems.

Claims of copyright infringement have rankled AI firms in recent months with Getty Images suing Stable Diffusion for using copyrighted photos to train its systems. OpenAI has also faced criticism for refusing to share details of the dataset used to train its software.

“There have been calls from outside and inside the Parliament for a ban or classifying ChatGPT as high-risk,” said MEP Svenja Hahn. “The final compromise is innovation-friendly as it does not classify these models as ‘high risk,’ but sets requirements for transparency and quality.”

© Thomson Reuters 2023


Smartphone companies have launched many compelling devices over the first quarter of 2023. What are some of the best phones launched in 2023 you can buy today? We discuss this on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.

Alphabet CEO Sundar Pichai Reaps $226 Million Compensation in 2022 Amid Layoffs

Alphabet Chief Executive Sundar Pichai received total compensation of about $226 million (roughly Rs. 1,850 crore) in 2022, more than 800 times the median employee’s pay, the company said in a securities filing on Friday.

Pichai’s compensation included stock awards of about $218 million (roughly Rs. 1,800 crore), the filing showed.

The pay disparity comes at a time when Alphabet, the parent company of Google, has been cutting jobs globally. The Mountain View, California-based company announced plans in January to cut 12,000 jobs around the world, equivalent to 6 percent of its global workforce.

Early this month, hundreds of Google employees staged a walkout at the company’s London offices following a dispute over layoffs.

In March, Google employees staged a walkout at the company’s Zurich offices after more than 200 workers were laid off.

Meanwhile, the company is working rapidly to make its chatbot Bard stand out among competitors. On Friday, Google announced that Bard, its generative artificial intelligence (AI) chatbot, will help people write code to develop software, as the tech giant plays catch-up in a fast-moving race on AI technology.

Bard will be able to code in 20 programming languages including Java, C++ and Python, and can also help debug and explain code to users, Google said on Friday.

The company said Bard can also optimise code to make it faster or more efficient with simple prompts such as “Could you make that code faster?”.

Currently, Bard can be accessed by a small set of users who can chat with the bot and ask questions instead of running Google’s traditional search tool.

© Thomson Reuters 2023


Xiaomi launched its camera-focussed flagship Xiaomi 13 Ultra smartphone, while Apple opened its first stores in India this week. We discuss these developments, as well as other reports on smartphone-related rumours and more on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.

Alphabet-Backed Anthropic Releases OpenAI Rival Named Claude

Anthropic, an artificial intelligence company backed by Alphabet, on Tuesday released a large language model that competes directly with offerings from Microsoft-backed OpenAI, the creator of ChatGPT.

Large language models are algorithms that are taught to generate text by feeding them human-written training text. In recent years, researchers have obtained much more human-like results with such models by drastically increasing the amount of data fed to them and the amount of computing power used to train them. 
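The "learn to generate text from human-written training text" idea can be shown with a toy next-word model. Real LLMs use neural networks with billions of parameters; this bigram counter only illustrates the underlying principle of predicting the next token from statistics of the training corpus. The example corpus is invented.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict(model, word):
    """Return the continuation most often seen in training, if any."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # 'cat' — it followed 'the' twice, 'mat' once
```

Scaling the data and replacing the counter with a deep network trained to minimise prediction error is, loosely, what produces the human-like output the article describes.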

Claude, as Anthropic’s model is known, is built to carry out similar tasks to ChatGPT by responding to prompts with human-like text output, whether that is in the form of editing legal contracts or writing computer code.

But Anthropic, which was co-founded by siblings Dario and Daniela Amodei, both of whom are former OpenAI executives, has put a focus on producing AI systems that are less likely to generate offensive or dangerous content, such as instructions for computer hacking or making weapons, than other systems.

Such AI safety concerns gained prominence last month after Microsoft said it would limit queries to its new chat-powered Bing search engine after a New York Times columnist found that the chatbot displayed an alter ego and produced unsettling responses during an extended conversation.

Safety issues have been a thorny problem for tech companies because chatbots do not understand the meaning of the words they generate.

To avoid generating harmful content, the creators of chatbots often program them to avoid certain subject areas altogether. But that leaves chatbots vulnerable to so-called “prompt engineering,” where users talk their way around restrictions.
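The blocklist approach described above, and its brittleness, can be sketched in a few lines. The topic keywords and canned refusal are placeholders; real chatbot guardrails are far more sophisticated than substring matching, but the failure mode is analogous.

```python
# Hypothetical list of off-limits topics.
BLOCKED_TOPICS = ["hacking", "weapons"]

REFUSAL = "Sorry, I can't help with that."

def guarded_reply(prompt):
    """Refuse any prompt that mentions a blocked topic verbatim."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return "...generated answer..."

print(guarded_reply("Tell me about hacking"))   # refused
print(guarded_reply("Tell me about h4cking"))   # slips past the filter
```

Because the filter matches surface strings rather than meaning, a reworded prompt sails through, which is exactly the "prompt engineering" loophole the article describes.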

Anthropic has taken a different approach, giving Claude a set of principles at the time the model is “trained” with vast amounts of text data. Rather than trying to avoid potentially dangerous topics, Claude is designed to explain its objections, based on its principles.

“There was nothing scary. That’s one of the reasons we liked Anthropic,” Richard Robinson, chief executive of Robin AI, a London-based startup that uses AI to analyze legal contracts and was granted early access to Claude by Anthropic, told Reuters in an interview.

Robinson said his firm had tried applying OpenAI’s technology to contracts but found that Claude was both better at understanding dense legal language and less likely to generate strange responses.

“If anything, the challenge was in getting it to loosen its restraints somewhat for genuinely acceptable uses,” Robinson said.

© Thomson Reuters 2023


After facing headwinds in India last year, Xiaomi is all set to take on the competition in 2023. What are the company’s plans for its wide product portfolio and its Make in India commitment in the country? We discuss this and more on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.

