ChatGPT Integrates Google Drive and Microsoft OneDrive For Paid Users With Connect Apps Feature

ChatGPT is getting a new Connect Apps feature that will allow users to integrate their Google Drive and Microsoft OneDrive with the artificial intelligence (AI) platform. This feature will enable users to eliminate the hassle of downloading documents to their devices and then manually uploading them to the AI platform. However, this feature is only available for the paid users of the chatbot, which includes the ChatGPT Plus, Teams, and Enterprise users. Notably, the AI firm has also begun gradually rolling out GPT-4o to users globally.

The feature was announced by OpenAI via a blog post, where it also introduced a couple more features. All of these features are part of the company’s Spring Update, which also unveiled the GPT-4o AI model with emotive voice and computer vision capabilities. Now, paid users of ChatGPT will get the option to directly upload Google Sheets, Docs, Slides, and Microsoft Excel, Word, and PowerPoint files to ChatGPT with the new Connect Apps feature.

Connect Apps option will let users integrate their primary cloud storage service between Google Drive and Microsoft OneDrive (both enterprise and personal account) with the platform. While the company has not specified, this feature should be available on both the website and the mobile apps. To find the feature, users will have to tap on the paper clip icon located on the left edge of the text field. Tapping on the menu option will open a larger box which will show Google and Microsoft’s cloud storage that can be connected.

Once connected, users can directly upload the files and the AI chatbot will process them. Apart from this, OpenAI is also introducing interactive tables and charts for ChatGPT. Now, when the platform generates a table or chart, users can interact with it and make edits. While they won’t be able to make manual edits at this stage, they can write additional prompts to regroup the tables or to change the colours of a pie chart, and the AI will do it.

After the final chart or table has been prepared, users will also be able to download them. OpenAI says the interactive feature supports several chart types, and in case the user-specified type is unavailable, the chatbot will generate a static chart.


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

OpenAI GPT-4o Begins Rolling Out to Some Users, Gets Web Searching Capability

OpenAI GPT-4o artificial intelligence (AI) model was unveiled on Monday, and it is now being rolled out to some users. The newest flagship-grade AI model by the company introduced significant improvements in the speech and vision capabilities of the chatbot, as well as added a better understanding of the language and context of the queries. For now, users are getting the AI model with limited access and the voice and video features are not available to use. People can, however, use its text and web search capabilities.

Gadgets 360 got access to the GPT-4o model on Friday morning. This confirms that the AI model will be available in India, even when it was not specified during OpenAI’s Spring Update event. However, it was available to only a couple of staff members, so it’s likely the company is rolling out the AI model gradually and it will take a few weeks before everyone can use it. The limited access is also very restrictive. We were able to get about ten questions in before our limit expired, and then we were shifted back to GPT-3.5.

ChatGPT’s GPT-4o – website view

Currently, users cannot do anything to get access faster than others. There is no waitlist to join. However, users will require an OpenAI account to be eligible for this update. Once GPT-4o is available, users will get a message when opening the website which mentions that they can now access it in a limited capacity. If you have the same account on your Android or iOS app, you will get access to the model there as well. Do note, that reloading the page will make the message disappear.

There is an easy test to check whether you have GPT-4o or not (in case you might have missed the message). After opening the ChatGPT website, you can see a collapsible menu on the top left within the margin. If you do not have access to GPT-4o, and you’re a free user, it will show ChatGPT 3.5 and give you the option to sign up for ChatGPT Plus with access to GPT-4. However, if you do have access to the new AI model, the menu will not mention any numbers and simply mention ChatGPT and ChatGPT Plus. Further, the lightning icon is replaced with two intersecting elliptical circles (looks like a minimalist atom icon).

ChatGPT’s GPT-4o – Android app view

We took the new AI model for a spin and found some improvements in its responses. One particular use case was in solving mathematical equations. Compared to GPT-3.5, it now shows the answers in a better format and does not complete multiple steps in one go. Creative generation is also more fluid and the usage of its ‘robotic’ language has reduced significantly. And the biggest upgrade is that this model can search the web to give you the latest information, so you do not have to worry about its knowledge cut-off anymore. Every web-based search result now comes with citations for which website was used to get the information.

Check out our Latest News and Follow us at Facebook

Original Source

OpenAI Partners Up With Reddit to Bring Its Content to ChatGPT and New AI Products

OpenAI and Reddit have entered into a partnership that will see the artificial intelligence (AI) firm get access to Reddit’s real-time data for ChatGPT and any new AI products the company launches in the future. Reddit will also be able to leverage OpenAI’s technology to bring AI-powered features to its platform. OpenAI will also become an advertiser on the latter’s platform. Notably, the social media platform signed a similar deal with Google in February, reported to be worth $60 million (roughly Rs. 500 crore) a year.

The details of the partnership were shared in a blog post from Reddit. No financial terms, however, were revealed. OpenAI will get access to Reddit’s Data API (application programming interface), which will allow the company to pull real-time content from the platform and use it for ChatGPT and future products. Interestingly, it was not mentioned whether the data will be used to train the AI models or to show up as query results. The latter would not be out of the question given that several recent rumours suggest that OpenAI is working on an AI-powered search engine that could rival Google Search.

Reddit is also benefitting from this deal as the social media platform will get access to OpenAI’s AI models to build features for its platform. The company said it will soon introduce AI features for redditors and the mods (the moderators of subreddits).

Notably, OpenAI made a disclosure in its announcement post, confirming that company CEO Sam Altman was a shareholder in Reddit. The deal, however, was led by COO Brad Lightcap and was approved by its independent Board of Directors. Notably, SEC (Securities and Exchange Commission) filings of Reddit highlight that Altman is the third largest shareholder of the company.

Last year, Reddit had sparked outrage in its community by announcing its intentions to charge for its API, which was kept free ever since the company’s inception. The change was not well received in the community as many third-party apps that used its API could not afford to run, and many subreddits went private to protest the move. However, the company remained firm on its decision. Now, one year later, the API is a commercial tool which has already been sold to Google and OpenAI. The social media platform went public in March 2024.


Affiliate links may be automatically generated – see our ethics statement for details.

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who’sThat360 on Instagram and YouTube.


Samsung Galaxy M35 Design and Colour Options Leaked Ahead of Debut; Bears Striking Resemblance to Galaxy A35



Crypto Price Today: Bitcoin Price Rises Alongside Several Altcoins as Inflation Data Spurs Speculation on Interest Rate Cut



Check out our Latest News and Follow us at Facebook

Original Source

Google Teases Computer Vision, Conversational Capabilities of Gemini AI Ahead of Google I/O Event

Google shared a video on its social media platforms on Monday, teasing new capabilities of its artificial intelligence (AI)-powered chatbot Gemini. The video was released just a day before the company’s annual developer-focused Google I/O event. It is believed that the tech giant could make several announcements around AI and unveil new features and possibly new AI models. Besides that, the centre-stage is likely to be taken by Android 15 and Wear OS 5, which could be unveiled during the event.

In a short video posted on X (formerly known as Twitter), the official account of Google teased new capabilities of its in-house AI chatbot. The 50 second-long video highlighted marked improvements in its speech, giving Gemini a more emotive voice and modulations that gives it a more human-like appearance. Further, the video highlighted new computer vision capabilities. The AI could pick up on the visuals on the screen and analyse it.

Gemini could also access the camera of the smartphone, a capability it does not possess at present. The user was moving the camera across the space and asked the AI to describe what it saw. Almost without any time lag, the chatbot could describe the setting as a stage and when prompted, could even recognise the Google I/O logo and share information around it.

The video shared no further details about the AI, and instead asked people to watch the event to know more. There are some questions that might be answered during the event such as whether Google is using a new large language model (LLM) for computer vision or if it an upgraded version of Gemini 1.5 Pro. Further, Google may also reveal what else can the AI do with its computer vision. Notably, there are rumours that the tech giant might introduce Gems, which are considered to be chatbot agents that can be designed for particular tasks, similar to OpenAI’s GPTs.

While Google’s event is expected to introduce new features to Gemini, OpenAI held its Spring Update event on Monday and unveiled its latest GPT-4o AI model that added features to ChatGPT, similar to the video shared by Google. The new AI model allows it to have a conversational speech, computer vision, real-time language translation, and more.

Check out our Latest News and Follow us at Facebook

Original Source

OpenAI GPT-4o With Real-Time Responses and Video Interaction Announced, GPT-4 Features Now Available for Free

OpenAI held its much-anticipated Spring Update event on Monday where it announced a new desktop app for ChatGPT, minor user interface changes to ChatGPT’s web client, and a new flagship-level artificial intelligence (AI) model dubbed GPT-4o. The event was streamed online on YouTube and was held in front of a small live audience. During the event, the AI firm also announced that all the GPT-4 features, which were so far available only to premium users, will now be available to everyone for free.

OpenAI’s ChatGPT desktop app and interface refresh

Mira Murati, the Chief Technical Officer of OpenAI, kickstarted the event and launched the new ChatGPT desktop app, which now comes with computer vision and can look at the user’s screen. Users will be able to turn this feature on and off, and the AI will analyse and assist with whatever is shown. The CTO also revealed that the ChatGPT’s web version is getting a minor interface refresh. The new UI comes with a minimalist appearance and users will see suggestion cards when entering the website. The icons are also smaller and hide the entire side panel, making a larger portion of the screen available for conversations. Notably, ChatGPT can now also access web browser and provide ral-time search results.

GPT-4o features

The main attraction of the OpenAI event was the company’s newest flagship-grade AI model called GPT-4o, where the ‘o’ stands for omni-model. Murati highlights that the new chatbot is twice as fast, 50 percent cheaper, and has five times higher rate limits compared to the GPT-4 Turbo model.

GPT-4o also offers significant improvements in the latency of responses and can generate real-time responses even in speech mode. In a live demo of the AI model, OpenAI showcased that it can converse in real time and react to the user. GPT-4o-powered ChatGPT can now also be interrupted to answer a different question, which was impossible earlier. However, the biggest enhancement in the unveiled model is the inclusion of emotive voices.

Now, when ChatGPT speaks, its responses contain various voice modulations, making it sound more human and less robotic. A demo showed that the AI can also pick up on human emotions in speech and react to them. For instance, if a user speaks in a panicking voice, it will speak in a concerned voice.

Improvements have also been made to its computer vision, and based on the live demos, it can now process and respond to live video feeds from the device’s camera. It can see a user solve a mathematical equation and offer step-by-step guidance. It can also correct the user in real time if he makes a mistake. Similarly, it can now process large coding data and instantaneously analyse it and share suggestions to improve it. Finally, users can now open the camera and speak with their faces visible, and the AI can detect their emotions.

Finally, another live demo highlighted that the ChatGPT, powered by the latest AI model, can also perform live voice translations and speak in multiple languages in quick succession. While OpenAI did not mention the subscription price for access to the GPT-4o model, it highlighted that it will be rolled out in the coming weeks and available as an API.

GPT-4 is now available for free

Apart from all the new launches, OpenAI has also made the GPT-4 AI model, including its features, available for free. People using the free tier of the platform will be able to access features such as GPTs (mini chatbots designed for specific use cases), GPT Store, the Memory feature through which the AI can remember the user and specific information relating to them for future conversations, and its advanced data analytics without paying anything.

Check out our Latest News and Follow us at Facebook

Original Source

OpenAI Shares Model Spec, a Document Highlighting Its Approach to Building an Ethical AI

OpenAI shared its Model Spec on Wednesday, the first draft of a document that highlights the company’s approach towards building a responsible and ethical artificial intelligence (AI) model. The document mentions a long list of things that an AI should focus on while answering a user query. The items on the list range from benefitting humanity, and complying with laws to respecting a creator and their rights. The AI firm specified that all of its AI models including GPT, Dall-E, and soon-to-be-launched Sora will follow these codes of conduct in the future.

In the Model Spec document, OpenAI stated, “Our intention is to use the Model Spec as guidelines for researchers and data labelers to create data as part of a technique called reinforcement learning from human feedback (RLHF). We have not yet used the Model Spec in its current form, though parts of it are based on documentation that we have used for RLHF at OpenAI. We are also working on techniques that enable our models to directly learn from the Model Spec.”

Some of the major rules include following the chain of command where the developer’s instructions cannot be overridden, complying with applicable laws, respecting creators and their rights, protecting people’s privacy, and more. One particular rule also focused on not providing information hazards. These relate to the information that can create chemical, biological, radiological, and/or nuclear (CBRN) threats.

Apart from these, there are several defaults which have been placed as permanent codes of conduct for any AI model. These include assuming the best intentions from the user or developer, asking clarifying questions, being helpful without overstepping, assuming an objective point of view, not trying to change anyone’s mind, expressing uncertainty, and more.

However, the document is not the only point of reference for the AI firm. It highlighted that the Model Spec will be accompanied by the company’s usage policies which regulate how it expects people to use the API and its ChatGPT product. “The Spec, like our models themselves, will be continuously updated based on what we learn by sharing it and listening to feedback from stakeholders,” OpenAI added.


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Microsoft MAI-1 AI Model With 500 Billion Parameters Could Soon Be Unveiled: Report

Microsoft is reportedly working on a new artificial intelligence (AI) model dubbed MAI-1, which could be its largest in-house model to date. The company recently created its AI division, which is now headed by Mustafa Suleyman, co-founder of Google DeepMind and former CEO of Inflection AI. It is said that Suleyman is heading the development of the large language model and it can be unveiled soon. Notably, Microsoft launched its open-source small language model (SLM) Phi-3-mini last month as a successor to the Phi-2 model.

According to a report by The Information, the MAI-1 AI model is in its advanced stages of development. However, it is said that its purpose has not been determined yet, and it will depend on its final capabilities. Citing unnamed Microsoft employees familiar with the matter, the report also highlighted that the project is being headed by Suleyman. It is believed that the tech giant could preview the AI model during its annual developer-focused event Microsoft Build which will be held starting May 21.

The MAI-1 AI model reportedly contains 500 billion parameters, which makes it the largest LLM created by the company so far. For comparison, the Phi-3-mini model contained 3.8 billion parameters, whereas GPT-4 comes with one trillion parameters. The larger the number of parameters, the wider the knowledge base of an AI model. It also helps in improving the contextual knowledge window of the chatbot. Since MAI-3 has not been released, no benchmark scores are available to judge its performance compared to the top models.

As per the report, Microsoft did not bring this AI model from Inflection AI and instead built it from scratch in-house. However, it might have been trained using data from the startup. It is unclear if the MAI-1 AI model will also be open-sourced like Phi-3-mini, or will require a subscription fee for access. The tech giant could also use it for its own AI products. These details will likely be shared during the company’s Build event later this month.

Separately, OpenAI is reportedly working on its ChatGPT-powered search engine and has already created a domain and SSL certificate for its first-ever search product. Some rumours have suggested that the search engine could be launched on May 9.


Affiliate links may be automatically generated – see our ethics statement for details.

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who’sThat360 on Instagram and YouTube.


Lenovo Tab K11 With 7,040mAh Battery, MediaTek Helio G88 SoC Launched in India



Bitcoin Crosses One Billion Transactions Milestone First Time Since Its Inception in 2009



Check out our Latest News and Follow us at Facebook

Original Source

OpenAI Tipped to Be Working on a ChatGPT-Powered AI Search Engine to Rival Google Search

OpenAI might be working on an artificial intelligence (AI)-powered search engine that will be based on its ChatGPT. As per a post on a forum, the AI firm might have already created the domain and the Secure Sockets Layer (SSL) certificates for the website, which are necessary to authenticate a website. If this is true, then OpenAI will directly position itself to compete with major players in the segment such as Google Search and Microsoft Bing as well as Perplexity AI, which also has its own AI search engine.

The information comes from a community post on Y Combinator’s Hacker News which claimed, “Search.chatgpt.com domain and SSL cert have been created.” The post was made by a netizen with the handle daolf, and the post had 127 upvotes at the time of writing. We took a look at the posting history of the user and did not find any leaks or rumours, although the tipster does share a significant amount of news articles on the forum. While this suggests that the information might be genuine, there is no way to know for sure.

However, some AI influencers have also begun posting about a ChatGPT-powered search engine on social media. Tipster @nonmayorpete posted on X (formerly known as Twitter) about the speculated search engine and even included the date May 9, suggesting that is the date when the domain will go live.

We tried to access search.chatgpt.com but the web page simply states “Not found”. Since it is a subdomain of chatgpt.com which is already taken by OpenAI, it is not possible to check the status of the subdomain.

An AI-powered search engine is not a new concept, however. Perplexity AI, which launched in 2022, is a popular example of a chatbot-controlled search engine. Users can search their queries and it scours through the internet to find relevant information in text form as well as shows websites that can offer additional information. Notably, Perplexity AI uses Microsoft Bing for its web indexing. If the rumours are true, OpenAI’s search engine product could function similarly.


Affiliate links may be automatically generated – see our ethics statement for details.

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who’sThat360 on Instagram and YouTube.


OnePlus 13 Alleged Render Suggests Revamped Rear Design With Rounded Corners



Apple Pencil Pro Name Spotted on Japanese Website Ahead of ‘Let Loose’ Event



Check out our Latest News and Follow us at Facebook

Original Source

Anthropic Launches Claude iOS App to Bring the AI Assistant to the iPhone

Anthropic’s artificial intelligence (AI)-powered chatbot Claude is now making its way to the iPhone. The company announced the launch of its iOS app on Wednesday and said it is generally available globally. This is the first time the AI assistant has left the web interface and received a dedicated smartphone app. Alongside, it also announced a new Teams subscription plan for businesses that will allow corporates to purchase Claude’s access for the entire staff. Notably, Anthropic released Claude 3 AI models in March.

In a post on X (formerly known as Twitter), Anthropic announced the launch of the iOS app. The mobile app functions the same way as the web interface, and we found it quite optimised and user-friendly. The iPhone app comes with features such as seamless sync with web chats that allow you to pick up the conversation on the app after leaving it midway on the web interface.

Claude iOS app
Photo Credit: Anthropic

 

The iPhone app also comes with vision capabilities. With permission from the user, the app can access the camera and the photo library of the iPhone to offer real-time analysis of the images. Users can potentially click a picture of an object and ask the AI to identify it.

While the app can be downloaded for free, it does come with the same restrictions as the free web interface of the platform. You only get access to Haiku or Sonnet AI models and there is a daily message limit which varies depending on the load on the server. Notably, there are three variants of Claude AI — Haiku, Sonnet, and Opus. Haiku is the fastest but the least intelligent, Sonnet is slower but more intelligent, and Opus does both.

If you do not want to hit the daily limit, you will have to pay $20 (roughly Rs. 1,700) a month for the Pro subscription. This opens access to all three AI models, a higher number of chats, and priority access during high-traffic periods. Alongside its iOS app, Anthropic has also introduced a new Team plan for enterprises, which businesses can purchase for their staff. The plan offers everything in the Pro tier as well as a higher usage rate than Pro and access to its 200K context window mode, which is designed to process long documents. The Teams plan comes at the price of $30 (roughly Rs. 2,500) per user per month with a minimum of five seats.


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Amazon Q AI Assistant Now Available for Enterprise Customers, Amazon Q Apps Out in Preview

Amazon announced the wider availability of its latest artificial intelligence (AI)-powered assistant for enterprise, Amazon Q, on Tuesday. The e-commerce giant first announced its business-focused chatbot in November 2023, promising both generative and analytical assistance based on the business’ in-house data. Later, Amazon made the AI tool available to a limited number of users. Now, the company has made it generally available and has added some new features to it. Notably, enterprises will now get two separate chatbots — Amazon Q Developer and Amazon Q Business — for separate sets of tasks.

In a newsroom post, Amazon’s cloud computing platform Amazon Web Services (AWS) announced that both Amazon Q Developer and Business chatbots are now available for enterprises. This will be available to those customers who use AWS services. The AI bot trains and learns from the business’ data and workflows and can help with more coding-related and business analytics-related tasks.

The Amazon Q Developer is designed as an assistant for software developers and it can help them by “performing tedious and repetitive tasks” such as coding, testing, upgrading applications, troubleshooting, security scanning and fixes as well as optimising AWS resources. The company says the saved time can be used by professionals to develop unique experiences for end users and deploy them faster.

For analytical assistance, the tech giant is making its Amazon Q Business available. This tool is designed to accept and process natural language queries and prompts. Based on the knowledge from the business’ database stored in AWS servers, it can answer questions, generate summaries and content, write reports, and provide data-driven analysis. Amazon Q is also being added to Amazon QuickSight, AWS’s unified Business Intelligence service for the cloud.

Apart from these, Amazon also has another feature which is not yet available to all users. Dubbed Amazon Q Apps, it allows employees to build AI-powered apps based on the company’s data. Even employees who have no prior knowledge of coding can build apps through it by providing a simple description of the app and the tasks it should perform. The AI chatbot can then create the app end-to-end. This feature is currently in preview.


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Exit mobile version