AI Archives - Technowize
Wise Word on Technology and Innovations

Anthropic’s Model Context Protocol: Claude AI’s Data Integration Enhanced
https://www.technowize.com/anthropics-model-context-protocol-claude-ais-data-integration-enhanced/
Fri, 29 Nov 2024
The Anthropic Model Context Protocol aims to eliminate the silos of data AI models rely on, with a more integrated system of data access for all.

AI systems are thriving, but one constraint unites them all: their abilities are trapped behind silos of data that require constant updating. To address this issue, AI company Anthropic is open-sourcing the Model Context Protocol (MCP), which aims to improve the way AI models interact with data sources.

The company believes this is a surefire way to let AI tools provide more relevant responses to queries by “connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments.” Its AI assistant Claude can now support better AI data integration, replacing scattered, per-source access to data with a more unified system of operation.

When paired with the other desktop updates for Claude AI and its enhanced PDF analysis capabilities, the AI bot is on the way to becoming a leader in its category.

Image: Pexels

Anthropic’s Model Context Protocol—Why Do We Need a New Operation Mode?

To understand the Anthropic Model Context Protocol, we must first decipher what modern-day AI models are missing. AI assistants are gaining popularity every day as they become more capable of providing accurate, realistic responses. However, the large majority of them are significantly constrained by the quality and quantity of data they can access.

To stay up to date, models must be manually reconnected to new data sources, because they usually operate in silos, cut off from the wealth of information stored in separate databases. Integrating new data sources is a constant challenge: each source has its own protocol, forcing AI models, and their developers, to expend considerable effort reconfiguring systems each time. These processes are taxing and time-consuming.

The lack of standardization may seem like a minor problem now, but as AI systems continue to expand and data sources continue to multiply, these fragmented processes could become a serious limitation on AI functioning further down the line.

How Does the Anthropic Model Context Protocol Solve the Issue?

The Model Context Protocol (MCP) is an open standard that lets developers build two-way connections between the many data sources and AI tools, not just for Claude AI’s data integration but for other AI systems as well. Anthropic is aiming to establish a universal protocol that simplifies the integration process, connecting AI systems to myriad data sources regardless of the technologies that underlie them.
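
For a sense of what this standardization looks like on the wire, consider the sketch below. MCP messages follow JSON-RPC 2.0, and the spec defines methods such as tools/list that let a client discover what any server can do. The server and tool shown here are hypothetical, and the snippet is a minimal illustration of the idea rather than a working MCP implementation.

```python
import json

# MCP requests are JSON-RPC 2.0 messages, so the same discovery call
# works against any compliant server, whatever data source it wraps.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# An illustrative response from a hypothetical GitHub-backed server;
# "search_issues" and its schema are invented for this example.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_issues",
                "description": "Search issues in a repository",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# Because every server advertises itself in the same shape, client-side
# discovery code is written once and reused for every data source:
for tool in response["result"]["tools"]:
    print(f"{tool['name']}: {tool['description']}")

print(json.dumps(request))  # what actually travels over the wire
```

The point is that the discovery loop at the bottom never changes: swap the GitHub server for a Postgres or Slack one and the client code stays the same.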

Anthropic’s tooling for this purpose comes in three layers. First, the company has published the MCP specification and SDKs, establishing the guidelines developers need to build these servers themselves. Second, it has added local MCP server support to the Claude Desktop apps for those who want to create repositories of their own private data for local use, allowing small teams to leverage AI with their data.

Finally, the Anthropic Model Context Protocol has pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer, so developers can start utilizing these immediately. 

According to Anthropic, “Early adopters like Block and Apollo have integrated MCP into their systems, while development tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms—enabling AI agents to better retrieve relevant information to understand the context around a coding task further and produce more nuanced and functional code with fewer attempts.”

Desktop Updates for Claude AI

Apart from Claude AI’s data integration advancements, another major announcement from the company is the arrival of the desktop app for Mac and Windows users. The freely available tool makes it easier for users to access the AI without having to search for the website each time.

The Claude AI desktop app mirrors the web experience but adds the capacity to work with local desktop applications and to set up repetitive tasks. This takes us one step closer to Anthropic’s goal of building an AI that is fully integrated into a user’s daily routine, compatible with the other services they use regularly.

Anthropic has also worked on Claude’s PDF analysis capabilities so users can summarize or review documents in more depth. This should make it easier for users to pull data from their documents or refer back to key areas of interest quickly. Users of the smartphone app also have something to look forward to with Claude AI’s dictation support, which now provides the option of using voice commands for queries.

The AI space has a handful of big players, and all of them have their own goals. OpenAI is now trying to capture search, the space Google dominates with its search engine and Gemini. Google Gemini, on the other hand, is exploring new spaces with its integration into the Spotify ecosystem of services. Anthropic’s pursuit of MCP integration is admittedly less flashy, but considerably more groundbreaking in its goal of standardizing how AI models operate.

OpenAI Plans to Develop Web Browser, Becoming a Rival to Google
https://www.technowize.com/openai-plans-to-develop-web-browser-becoming-a-rival-to-google/
Mon, 25 Nov 2024
OpenAI is considering launching a search browser to challenge traditional search engines, boosting its global reach.

OpenAI is considering developing a search browser to expand its reach in the technology market. The company plans to power search features in its chatbot and to partner with a variety of eCommerce websites and apps. ChatGPT is already one of the most dominant artificial intelligence chatbots on the market, and a web browser would boost its reach globally.

As reported by The Information, OpenAI’s partnerships with different websites and apps should help deliver more accurate answers and searches for users. Artificial intelligence is already changing traditional web search and online shopping with more personalized options. AI search tools use natural language processing, machine learning, and user data to better understand complex questions and provide personalized answers and recommendations.


OpenAI’s Plans for a Browser Launch

On October 31, OpenAI revealed a ChatGPT update that integrates web search options and could challenge traditional search engines. The update gives direct answers to searches, with citations for the sources of information. The new feature was rolled out to ChatGPT Plus and Team subscribers and provides everything from stock prices to weather forecasts.

During the feature announcement, OpenAI said, “Getting useful answers on the web can take a lot of effort, it often requires multiple searches and digging through links to find quality sources and the right information.”

On October 8, OpenAI and Hearst formed a content partnership to integrate the publisher’s newspaper and magazine articles into OpenAI’s generative AI products. ChatGPT users will gain access to content from Hearst publications, with source citations leading users back to the original Hearst articles.

ChatGPT now has 250 million active users each week, with around 5% to 6% of free users upgrading to the paid version, according to OpenAI CFO Sarah Friar in an interview with Bloomberg TV on Oct. 28. Meanwhile, OpenAI is also focusing on attracting more corporate clients and is enthusiastic about the potential growth in that area.

The Challenge of Valuing OpenAI

OpenAI and its main investor, Microsoft, are in talks about how to split the benefits when OpenAI becomes a for-profit company. However, agreeing on the right market value for these assets is proving to be a challenging task.

According to legal experts, it all depends on who is doing the math on the assets. Angela Lee, a professor at Columbia Business School, said, “The issue is that there are probably six to 10 different ways to value a company. And depending on who you ask, and my guess is, depending on which model you use, you could be off by a factor of like 3x to 5x.”

SoftBank Gets First Nvidia Blackwell Superchips
https://www.technowize.com/softbank-gets-first-nvidia-blackwell-superchips/
Wed, 13 Nov 2024
SoftBank is the first to receive Nvidia Blackwell chips for its new supercomputer.

SoftBank will be the first to get Nvidia’s Blackwell chips, destined for a groundbreaking new supercomputer in Japan. The move comes as SoftBank’s CEO, Masayoshi Son, aims to capitalize on the AI boom. SoftBank is also planning to use Nvidia’s Grace Blackwell chips for another supercomputer. The announcement was made during Nvidia’s AI event in Japan, where both companies’ CEOs were present.

SoftBank will use a DGX SuperPOD supercomputer, powered by Nvidia’s Blackwell-based DGX B200 systems, to boost its AI development. The supercomputer will support AI projects for institutions, universities, research centers, and government facilities.

Image Credit: Bloomberg (Nvidia CEO on the left, SoftBank CEO on the right)

SoftBank Receives Nvidia’s First Blackwell Chip

SoftBank is the first to receive Nvidia’s Blackwell chip and plans to build Japan’s most powerful supercomputer. This can be seen as a risky move as Japan already ranks in the top 10 countries in supercomputing. The new supercomputer will be used to develop generative AI and will be accessible to research institutions and businesses across Japan once it’s finished.

Recently, SoftBank CEO Masayoshi Son has sharpened his focus on AI. Last summer, SoftBank acquired British AI chipmaker Graphcore, and rumors circulated in September that the company was planning to invest millions in OpenAI.

SoftBank and Nvidia Unveil AI-Powered 5G Network Revolution

SoftBank has revealed the world’s first AI-powered 5G network, called AI-RAN, powered by Nvidia’s AI Aerial computing platform. It is a major step toward unlocking billions of dollars in AI revenue opportunities for telecom operators.

Nvidia and SoftBank estimate that telecom companies could earn $5 in AI-related revenue for every $1 they invest in AI-RAN infrastructure. SoftBank expects a return of up to 219% for each AI-RAN server added to its network.
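
The two figures can be reconciled if the $5 of revenue carries operating costs. The sketch below shows one hypothetical way the arithmetic could line up; the profit margin is an assumption made purely for illustration and is not a number disclosed by Nvidia or SoftBank.

```python
# Hypothetical reconciliation of the claimed AI-RAN economics.
# Assumption (not from Nvidia/SoftBank): the AI revenue carries
# operating costs, so only part of it is profit.

capex_per_server = 1.00       # normalized: $1 invested per server
revenue_multiple = 5.00       # claimed: $5 revenue per $1 invested
assumed_profit_margin = 0.64  # hypothetical margin on that revenue

profit = capex_per_server * revenue_multiple * assumed_profit_margin
roi = (profit - capex_per_server) / capex_per_server

print(f"Profit per $1 invested: ${profit:.2f}")
print(f"ROI: {roi:.0%}")  # ~220%, close to the quoted 'up to 219%'
```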

Nvidia-SoftBank Collaboration 2024: Powering the Future of AI and 5G

The collaboration was announced at Nvidia’s AI summit in Japan in the presence of SoftBank CEO Masayoshi Son. At the event, the two CEOs had a “fireside chat” in which Jensen Huang, CEO of Nvidia, shared a funny story: Son once offered to lend him money to buy all of Nvidia, because the market didn’t understand the company’s value. Huang said, “He wanted to lend me money to buy all of Nvidia. Now, I wish I had taken it.”

Nvidia, originally known for making graphics chips for gaming, has become one of the most valuable companies in the world thanks to the soaring demand for AI chips.

The collaboration between SoftBank and Nvidia marks an exciting new chapter for both companies, as they push the boundaries of AI, supercomputing, and 5G technology. With SoftBank leading the way in Japan with its new AI-powered supercomputers and groundbreaking 5G network, the partnership is set to unlock incredible opportunities for innovation and growth. 

As both companies continue to focus on AI, their combined efforts could shape the future of technology and help revolutionize industries worldwide. Whether it’s powering supercomputers, developing generative AI, or transforming telecom networks, SoftBank and Nvidia are clearly at the forefront of the next tech revolution.

Google Open-Source SynthID is Now Watermarking AI-Generated Text
https://www.technowize.com/google-open-source-synthid-is-now-watermarking-ai-generated-text/
Thu, 24 Oct 2024
Google DeepMind open-sources its AI watermarking tool SynthID as it adds a text recognition feature.

Generative AI models have flooded the digital world with AI-generated content, including videos, images, designs, text, and music. While chatbots produce all types of content, some tools even offer to humanize AI-generated multimedia. Google is approaching the issue from a different angle: it has open-sourced SynthID to watermark AI-generated text.

SynthID is Google DeepMind’s AI watermarking tool, now available to watermark AI-generated text. Previously, the tool could only watermark AI-generated images, videos, and music, and was available to a limited set of users. In May, Google applied SynthID to its Gemini app and chatbots to gather feedback on the tool’s performance.

Pushmeet Kohli, vice president of research at Google DeepMind, told MIT Technology Review, “Now, other [generative] AI developers will be able to use this technology to help them detect whether text outputs have come from their own [large language models], making it easier for more developers to build AI responsibly.”


How does SynthID identify AI-generated text?

Google has open-sourced the tool, which has already been integrated with the Gemini chatbot. Developers and businesses can now use it to determine whether text output has come from their own AI chatbots. For now, only Google and developers with access to a detector that recognizes the watermark can identify marked text.

SynthID works by recognizing patterns in the tokens LLMs use in their text output. An LLM, or large language model, is the model underlying a chatbot; it generates text one token at a time, predicting each next token from the ones before it. A token can represent a character, a word, or a phrase.

The LLM makes its predictions based on the previous words, and a probability score is assigned to each candidate token. SynthID subtly adjusts these scores during generation, and the process repeats throughout the text, so a single sentence can carry ten or more adjusted probability scores. The final pattern of scores, combining the model’s word choices with the adjusted probabilities, is what constitutes the watermark.
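
For intuition, here is a deliberately simplified sketch of score-based watermark detection. It uses a keyed hash to split the vocabulary into “favored” and “disfavored” halves; a watermarking sampler would nudge generation toward favored tokens, and a detector then counts how often they appear. This illustrates the general technique only; Google’s actual SynthID-Text uses a more sophisticated scheme, and the key and tokenization here are made up.

```python
import hashlib

SECRET_KEY = "demo-key"  # hypothetical; a real system keeps this secret

def favored(prev_token: str, token: str) -> bool:
    """Keyed pseudo-random split of the vocabulary, seeded by context."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode())
    return digest.digest()[0] % 2 == 0  # roughly half the tokens are 'favored'

def favored_rate(tokens: list[str]) -> float:
    """Fraction of tokens falling in the favored half.

    Unwatermarked text hovers near 0.5. Text produced by a sampler that
    nudges probabilities toward favored tokens scores higher, and the
    statistical confidence grows with text length.
    """
    hits = sum(favored(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

sample = "the quick brown fox jumps over the lazy dog".split()
print(f"favored-token rate: {favored_rate(sample):.2f}")
```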

The accuracy of SynthID increases with the length of the generated text, since longer text contains a larger number of probability scores. Kohli said, “While SynthID isn’t a silver bullet for identifying AI-generated content, it is an important building block for developing more reliable AI identification tools.”

Even after a test spanning millions of prompts, researchers maintain that it is easy to alter Gemini-generated text and fool the detector. However, it is hard for ordinary users to work out how to alter the text, or which particular words need to be changed. SynthID may still have loopholes, but Google claims it is the most accurate option available for now.

Qualcomm Teams with Alphabet’s Google for Custom Automotive AI Solutions
https://www.technowize.com/qualcomm-teams-with-alphabets-google-for-custom-automotive-ai-solutions/
Thu, 24 Oct 2024
Qualcomm is teaming up with Alphabet’s Google to offer a combination of chips and software that will let automakers develop their own AI voice assistants.

Qualcomm said on Tuesday that it is teaming up with Alphabet’s Google to offer a combination of chips and software. The Qualcomm-Alphabet automotive partnership will let automakers develop their own AI voice assistants using technology from the two firms. Qualcomm’s chips have long powered mobile phones running Google’s Android operating system, and the company has since expanded into the automotive business.

(Image Credit: Qualcomm)

Qualcomm’s AI chips can power both a car’s dashboard and the automated driving systems used by General Motors and others. On Tuesday, Qualcomm said it is working with Google to create a version of Android Automotive OS that will run smoothly on Qualcomm chips.

Google’s Android Automotive OS

Google’s Android Automotive OS is an offering that automakers use behind the scenes to power a vehicle’s computing systems. Google and Qualcomm said automakers will be able to combine the joint offering with Google’s AI technology to create voice assistants that are unique to each automaker and can work without relying on a driver’s phone.

Qualcomm and Alphabet partnership

Qualcomm has established a strategic alliance with Alphabet’s Google to supply automakers with powerful AI voice assistants based on the firms’ combined semiconductor and software technology. The Qualcomm-Alphabet AI deal will allow manufacturers to create their own personalized voice assistants by integrating Qualcomm Snapdragon chips and Google AI technology into their vehicle systems.

Qualcomm on AI partnership with Alphabet

Qualcomm and Google announced that automakers will be able to use their joint offering, along with Google’s AI technology, to create unique voice assistants that operate independently of the driver’s phone.

Nakul Duggal, Qualcomm’s group manager for automotive, explained that while Qualcomm and Google have typically worked together yet independently, they now plan to rethink their collaboration to reduce friction and confusion for customers.

Mercedes and Qualcomm chip deal

In addition to the partnership with Alphabet’s Google, Qualcomm has unveiled two new chips for automotive applications: Snapdragon Cockpit Elite and Snapdragon Ride Elite. The Snapdragon Cockpit Elite is designed to power vehicle dashboards and improve how drivers interact with their vehicles, while the Snapdragon Ride Elite will offer autonomous driving features. Mercedes-Benz plans to use the Snapdragon Cockpit Elite chip in future vehicles, though neither Qualcomm nor Mercedes has specified when or in which models the chips will appear.

Meta’s Video Generative AI Is Set to Master Realistic Video Creation
https://www.technowize.com/metas-video-generative-ai-is-set-to-master-realistic-video-creation/
Tue, 08 Oct 2024
On Friday, Meta introduced a preview of MovieGen, an AI model designed to create realistic videos and insert changes into existing ones.

Meta announced a preview of one of its latest AI models, a tool designed to create realistic videos, audio, and images. Meta expects MovieGen to outperform rival models when it comes to creating and altering multimedia; however, the MovieGen AI launch does not have an official release date yet.

The company said, “This AI model will help people enhance their artistic capabilities instead of replacing human artists,” a claim we will be watching closely after MovieGen’s release. One of the model’s major highlights is its ability to create a realistic video from the prompt of a single image. MovieGen can generate a video of any described scenario and make anyone appear to perform any action with just a photograph.

Image: Meta

Meta MovieGen: Exploring How it Works 

Earlier, Meta worked on its video and image AI models separately; the Make-A-Scene video generator and the Emu image synthesis model were its first steps toward generative image and video AI. Now, building on those earlier models, Meta is set to launch the MovieGen video generator for its audiences. The model will be able to create, edit, and alter videos, audio, and images, and for the first time, the video generator will auto-generate audio as required by the user or as suits the video.

Users will be able to edit or make changes to an existing video. They can also create realistic videos from just an image of a person, an ability that has its pros and cons. The videos will be high quality, and the AI can generate clips of up to 16 seconds at 16 frames per second.

Meta has stated, “In human preference tests, MovieGen has stood out from OpenAI’s Sora, Runway Gen 3, and Chinese video AI model Kling.” The company also claims that complex changes, such as the movement of objects, camera movement, and subject-object interactions, can be made within an existing video.

Meta AI’s Deepfake Video Technology is Concerning

Deepfake videos have proven to be one of the most disturbing developments since the introduction of generative AI. While Meta calls the deepfake video generation capability “personalized video creation,” users are already aware of its potential to create more than just a personalized video. Through this feature, anyone can be made to appear to do things they never did, using a single photo of them.

Through MovieGen, the whole process of constructing deepfake videos becomes more convenient, and as with everything else on the internet, there will be no control over what is being created. There are many potential avenues for misuse of this feature, and it is impossible to fully grasp its implications.

This video synthesis model can insert people into different situations and places, and it can auto-generate matching audio, making the video more realistic. All the changes and insertions can be made through simple text prompts from the user.

While Meta has made significant strides in video generation, the company recognizes that its current model still has limitations. To address this, it aims to improve generation speed and overall quality by further scaling its models. Meta is also trying to direct the capabilities of video generative AI toward enhancing artists’ creativity while limiting misuse by its users.

ChatGPT’s Advanced Voice Mode Releases This Week to Limited Users
https://www.technowize.com/chatgpts-advanced-voice-mode-releases-this-week-to-limited-users/
Wed, 25 Sep 2024
The new OpenAI ChatGPT voice feature is being rolled out to users, but it is still in its testing phase and will hence be limited to specific regions and smaller groups within those areas.

Every few weeks, the most “advanced and human-like” version of an AI service makes a debut, and this time, we’re looking at the release of ChatGPT’s Advanced Voice Mode. ChatGPT Plus and Team subscribers will be the first to test the feature out, which should be a nice boost to their experience using the app. From next week, Enterprise and Edu customers will be able to experience OpenAI’s new ChatGPT voice feature as well, so there won’t be a big delay between the different tiers of subscribers.

ChatGPT’s voice mode update will be released slowly to user devices and those who can enable the service will receive a pop-up on their app, notifying them about the setting. OpenAI says users can expect features such as “Custom Instructions, Memory, five new voices, and improved accents.”


Ready for ChatGPT’s Advanced Voice Mode Release?

If you’re keeping pace with the progress being made in the AI industry, then ChatGPT’s Advanced Voice Mode release is understandably exciting. AI services like Google’s Gemini, Microsoft’s Copilot, or the extremely familiar Siri on Apple Intelligence already have a voice mode so the arrival of OpenAI’s new ChatGPT voice feature isn’t revolutionary all on its own. However, what makes the update stand out is the quality of the voice communication. 

ChatGPT undeniably leads the game whether you talk about content generation or human-like communication—in one instance the AI was even caught imitating its user—and that is a large part of why the ChatGPT voice mode update is so monumental. The new Advanced Voice Mode will bring 5 new voices to users, namely Arbor, Maple, Sol, Spruce, and Vale. 

This will bring the total count of voice options up to 9 and provide users with a varied reserve of assistants to choose from. The ‘Sky’ voice is pointedly missing after Scarlett Johansson accused the company of using her voice without her permission to capitalize on the AI association that users would make with the movie Her.

The “human-like” element of ChatGPT’s upcoming Advanced Voice Mode release goes beyond just sounding human. The AI will bring more of a personality to the table and generate conversational responses that a user might expect from a friend. It will be able to add humor to the conversation and allow for more natural prompts when generating a response. The bot can also be interrupted mid-response, which is useful for steering the conversation toward the desired response.

OpenAI’s Advanced Voice Mode will also give users access to an AI that is better able to understand accents and that can remember preferences and prompts while generating responses. 

Image: ChatGPT’s Advanced Voice Mode release will come with a daily limit.

Details of the New ChatGPT Voice Feature

ChatGPT Plus subscribers who have invested in the $20 USD monthly plan can expect to see the feature appear on their apps starting September 24, 2024. These subscribers will automatically see a notification on the app alerting them to the setting. 

Initially, the ChatGPT Voice Mode update will be “rolled out in a limited alpha to a select group of users,” so not everyone will get to experience the tool right away. However, “all Plus users will have access by the end of fall,” so as long as you are subscribed, you can expect to see the feature soon enough.

So who gets to be first on the list of subscribers seeing the roll-out? According to OpenAI, it “will depend on a variety of factors including but not limited to participation invitations and the specific criteria set for the alpha testing phase.” This doesn’t quite clarify who can expect to see the tool this week but users will have to cross their fingers and be patient. 

The ChatGPT advanced voice mode release will not extend to the EU, the U.K., Switzerland, Iceland, Norway, and Liechtenstein for now, so users in the area will have to wait for their turn to experience the upgraded services. 

According to CNBC, there is currently a time limit on how long you can access and use OpenAI’s Advanced Voice Mode. After using the service for about half an hour, its reporters received a notification that 15 minutes remained. OpenAI has confirmed the usage clause on its page, stating, “Usage of advanced Voice (audio inputs and outputs) by Plus and Team users is limited on a daily basis.”

This is likely an attempt to limit the feature to the test model, and a full rollout will extend how long users can put the service to use. Once users reach their daily limit, the conversation will end and users will be able to continue using the Standard voice tool.

OpenAI’s new ChatGPT voice feature is going to impress a lot of users and continue to disappoint others, but it is a big step forward towards the more universal integration of AI in our daily lives.

An Apple Visual Intelligence Search Feature Is Coming Soon
https://www.technowize.com/an-apple-visual-intelligence-search-feature-is-coming-soon/
Sun, 15 Sep 2024
The Apple Visual Intelligence AI tool will allow users to find information about the world around them using their phone cameras, collaborating with third-party services for relevant information.

Apple’s Visual Intelligence search feature is here to rival Google Lens, and reactions to the new tool have been wildly different. The iPhone AI visual feature was introduced during the “It’s Glowtime” event on September 9, and while Apple enthusiasts found it exciting, Android users were quick to say there was nothing new to see.

Apple organized the event primarily to bring the iPhone 16 lineup to its loyal customer base, but we were also on the receiving end of a few unexpected software surprises, nearly all of them to do with artificial intelligence. The Apple Visual Intelligence AI tool is expected to come out “later this year” with future iOS updates, so iPhone 16 series users have a lot to look forward to.

Image: Freepik

Apple Visual Intelligence Search—What Is It?

The iPhone’s AI visual feature will allow users to take a picture of their surroundings and automatically look up information about the contents of the image. Using a “combination of on-device intelligence and Apple services that never store your images,” the AI tool will be able to use your camera to scan your environment and read out text for you, or search for the breed of a cute dog you saw on the street.

Many users have pointed out that iPhones are already capable of doing this to a degree, but that doesn’t matter when you can re-advertise it explicitly as an AI tool. The tool can simplify your travel experience or work really effectively as a disability support tool for those who have visual issues, reading out or describing desired information to a user. 

This has been one of the only reasons we see the benefit of VR glasses, but having such functionality available on the smartphone doesn’t hurt either. The Visual Intelligence software could be a building block for future VR glasses from the company, but it doesn’t appear we’re any closer to seeing that particular product materialize.

Apart from just looking up information, the Apple Visual Intelligence search tool can also scan information from a poster or invite and add details such as the timing and location to your calendar, so it is more than just a scanner. Aspects like this make it more than a plain Google Lens alternative; at the end of the day, though, it doesn’t offer many additional capabilities beyond that.

Is the Apple Visual Search Technology Any Good?

It’s too soon to guess how well the tool will work and how accurately it can provide search results. The Apple tool may easily become a commonplace function for all users of the iPhone 16, but such a transition will take a while to occur. 

It also doesn’t help that the promotional video for the feature states that “if you come across a bike that looks exactly like the kind you’re in the market for, just tap to search Google for where you can buy something similar.” The app integration is great to see, but you could use Google Lens to do the same thing directly.

Apple did not elaborate on whether it will be able to translate text on sight, which is specifically what we use Google Lens for most often, but we’re hoping its capabilities aren’t limited to just identifying information and adding it to our calendar. 

Nor is Google Lens the only point of comparison: other apps like Pinterest and Vivino have their own built-in visual search tools. CamFind, PictureThis, and other similar apps function exclusively to identify content from your environment and give you relevant results.

While some loyal fans of the brand are in awe of the tool, calling it “insane” and “phenomenal,” Android users are out and about, discussing Apple’s impressive ability to market a feature as entirely new.


How Will the iPhone’s AI Visual Feature Work?

Apple’s visual search technology is only a button away for iPhone users, or to be more specific, it is for iPhone 16 users. The iPhone AI visual feature can be activated by the new Camera Control button available on all four models of the new iPhone 16 series. Using the handy button users won’t have to hunt for the feature but will be able to pull it up quickly to learn what they need to know about their environment.

The feature will support third-party integration so if you want ChatGPT’s help with homework, you will be able to reroute the information you capture to other apps. This makes it easier to integrate other AI tools that do more than just look up immediate information, allowing the Visual Intelligence to be more comprehensive by default.

Apple’s AI features have been reserved for the iPhone 15 Pro and all 16 series devices. With access to AI on the iPhone already limited, the Apple Visual Intelligence search feature could be further restricted to iPhone 16 users. Apple hasn’t confirmed whether you strictly need the Camera Control button to use the service or whether you can also find the option by scrolling through a menu. In the former case, access to the Google Lens alternative would be restricted even further, which might be a strategy to push people to upgrade to the latest model.

Either way, the Apple Visual Intelligence AI tool should be out later this year, and users will be able to get a more hands-on experience to determine how well the feature actually works.

Bumble’s AI Features Promise to Offer Some Friendly Advice
https://www.technowize.com/bumbles-ai-features-promise-to-offer-some-friendly-advice/
Thu, 12 Sep 2024
AI-powered Bumble profiles and AI-supported conversations—these are some of the changes the dating app is hoping to introduce in the future.

Just like other recently updated apps, Bumble is exploring AI features to assist its users in their pursuit of finding love. Bumble’s new AI tools were revealed during the 2024 Goldman Sachs Communacopia Technology Conference, where CEO Lidiane Jones walked listeners through some of the changes she envisions for the app.

The two main areas of integration mentioned were the AI-powered Bumble profiles and AI conversation support, both key elements of the app’s functioning. The inclusion of AI in dating apps was inevitable sooner or later, but how these apps integrate these tools is what we’re interested in seeing. 

Image: Freepik

Bumble AI Features Coming Soon—Ready to Find Your Match?

Bumble has been working on AI features for a while now and already has some tools available to help kickstart conversations. Now, the company is looking to bring a few more features to their services in winter, but an exact timeline hasn’t been provided. 

Bumble’s AI plans for conversation assistance and profile photo selection were first mentioned during an investor call. During the call, Jones explained, “We have an ambitious view of how AI will enhance the value we deliver to our customers in each step of the dating journey for profile creation, discovery, engagement, and the core of our matching models.”

Artificial intelligence is getting smarter by the day, and while its ability to make decisions isn’t highly regarded, it can make some calculated assessments based on the data it gathers from its own users.

First We Have An AI Photo Picker 

One of Bumble’s AI features is a photo picker assistant that can help users determine which images would serve their profile best. For many of us, this decision-making is driven by the collective vote of close friends who diligently scan our galleries for the best images to represent us.

Unfortunately, it’s when they aren’t around that we turn to dating apps, a majority of which are provided by Match Group to create the illusion of choice. Some outlier apps like Duolicious, Lefty, and Drybaby serve their own niche audiences, but they’re largely overshadowed by Match Group’s machinations.

Returning to Bumble’s new tools, the upcoming AI photo picker should help users select the best images to represent them from the photo gallery, making the process of profile creation a lot smoother. According to TechRadar, Tinder is working on a similar feature, so this could eventually become a standard service on dating apps.

“We want the bar for profile creation to continue to be high, but we want to reduce the friction that exists for users. Users have a lot of anxiety in creating profiles. We’re going to make that as smooth as possible. So profile creation is a big one.” 

In general, AI could be very integral to the process of creating detailed profiles that best represent who you are. You can already throw your information at ChatGPT and other genAI tools for a brief bio, but these tend to sound unbelievably generic and forcefully quirky. If Bumble does introduce such a tool, they’ll have to work hard to ensure the AI assistance doesn’t cause the quality of profiles to further decline. 

Bumble AI for Conversation Starters

AI-powered Bumble profiles sound both interesting and unsettling, but another way for artificial intelligence integration appears to be in encouraging conversations. It’s another task that should ideally be human-led, but many app users struggle with starting conversations in new and original ways every single time.

Taking matters into its own electronically generated hands, Bumble is going to introduce an icebreaker feature that should help users start a conversation more naturally with the help of prompts that are personalized for them. The Friend AI pendant is ambitious enough to consider that AI messages can fill in for human conversation entirely, but Bumble’s tool is not expected to be quite so hubristic.

Bumble already has a similar AI tool for its “Bumble for Friends” section, assisting users with starting a friendly conversation with those they want to build a bond with. Unsurprisingly, this text generation happens through OpenAI. 

Some of the prompts are pretty useful, creating a context for the conversation rather than the traditional “Hi, hello, what do you do?” These conversations are much easier to continue and will likely leave the app with happier, more satisfied users.

AI-powered Profiles Are Permissible, But Bumble Says No to Fake AI Profiles

Bumble’s AI features are meant to enhance the user’s experience on the app and support them in making strong profiles, finding good matches, and encouraging engaging conversations. The fake AI-generated profiles with no real people behind them detract heavily from this goal. The company has taken serious measures to eliminate such profiles, allowing users to report such accounts themselves if they ever see profiles using AI-generated photos.

Bumble isn’t leaving matters entirely to users either. The dating app owner also released a Deception Detector tool, combining AI moderation with human supervision to identify fake profiles. A large section of the internet is now run by AI bots and they plague every corner of the online world, even dating apps. Some of these fake profiles are very obviously AI-generated, but in cases where people add their own touch of realism, they can be dangerously misleading. 

The rise of AI integration in dating apps has its own pros and cons, so while Bumble works on its AI features, it’s reassuring that it is also working on putting checks in place to limit the misuse of such technology. AI tools that arrive in the future need to be thought out carefully.

Bumble Founder and Executive Chair Whitney Wolfe Herd had some interesting ideas to share during an interview with Bloomberg, explaining that someday, users may be able to talk to an AI “dating concierge” about insecurities, essentially seeking the kind of feedback a more rational individual might find from a friend or therapist. She even floated the idea of the concierge going on dates with other concierges to find good matches and determine if their human counterparts could also have a good date. Hopefully, this is a description of a vastly distant future.

Another Handy Recorder—Presenting the NotePin Wearable AI Tech
https://www.technowize.com/another-handy-recorder-presenting-the-notepin-wearable-ai-tech/
Sat, 31 Aug 2024
The NotePin is an AI wearable transcription service that can listen in on meetings and create detailed transcripts or summary notes depending on your selected templates.

It’s 2024 and you still haven’t adorned yourself with AI? This won’t do. Lucky for you, PLAUD’s NotePin wearable technology is willing to volunteer to help you address this issue. The NotePin AI tool is a new wearable that promises to improve the quality of your life, particularly by boosting your productivity. Like the Limitless AI pendant, this AI wearable primarily offers a transcription service to help you record conversations that you can review later at your convenience.

The PLAUD wearable AI is “an ultra-thin, ultra-light, wearable AI device designed to function as a memory capsule and help users improve productivity and efficiency in their careers,” meaning you can now walk around with an assistant to take notes when needed.

Image: PLAUD NotePin

Presenting NotePin, the Latest Wearable AI Technology

Nathan Hsu, CEO and co-founder at PLAUD.AI, calls the NotePin an “always-ready business partner” that can handle mundane tasks while you focus on what’s important to you. That’s an awfully tall claim for a gadget that’s essentially a voice recorder paired with an AI transcription service, but if it works effectively, it could be a reliable tool that many turn to, particularly those who work in the corporate sphere.

Considering the number of meetings they attend and need to keep track of, the AI wearable could offer a convenient way to return to conversations they’ve had earlier in the day or week.

The pill-shaped design is not particularly innovative or unique, but it does provide a sleek and unobtrusive look that is much needed for a device that is likely to be used most in professional settings. It is available in three simple colors—sunset purple, lunar silver, and cosmic gray. 

The NotePin wearable tech can be worn in many ways, adding to its versatility. It can be used as a necklace, strapped to your wrist, or clipped or pinned onto your clothes, ready to be used as you see fit. These wearing modes let the device adjust to the user, rather than forcing the user to accept a gadget that stands out jarringly from their outfit.


Getting to the Heart of the Matter—What Is the NotePin’s AI Purpose?

PLAUD’s wearable AI is designed to “elevate your workflow” by tracking your conversations at the touch of a button and providing a transcript of what was said. The AI is smart enough to identify when the speaker changes, attributing precise speaker labels to ensure your transcripts still make sense to you later on. The device comes with some preset, professional templates that you can select through the app, ensuring the records are structured in a way that suits you and your needs best.

Thankfully, the device doesn’t keep recording the whole time you equip it. It starts its job at the touch of a button. This is useful to at least hint to the other individual that they are being recorded, but that’s the bare minimum you would expect to see in a device such as this one. It also has a privacy light that should come on when it starts recording, alerting others to its presence. It does not come with a consent mode like the Limitless AI recorder does, which is a big miss in our books.

With support for 59 languages, you can choose between the GPT-4o and Claude 3.5 Sonnet LLM models, and the AI will not only provide a transcript but also summarize and analyze your recordings so you can see the most important details at a glance. The recorded information can be viewed through the app or website, and the AI tools let you search your recordings so you don’t have to scroll manually to find the details of a meeting that took place last week.

The NotePin AI’s purpose is clear, but does it have enough battery life to serve its goals? It does. The device can support 20 hours of continuous recording and has a 40-day standby time when not in continuous use. If you use the NotePin wearable tech solely for meetings and discussions, you likely should be able to get a couple of days of use out of it.

The PLAUD NotePin Subscription Plan Is Quite Expensive

PLAUD’s NotePin AI is priced at $169 USD. That’s not too bad of a deal if you plan on using it regularly to save time and energy spent on meetings, discussions, and other conversations. However, to get the full experience of using PLAUD’s wearable AI for extended periods, you’re forced to consider using a subscription. Just like the Humane AI Pin, the NotePin has a subscription plan laid out to enjoy the use of the gadget, but unlike Humane, the plan isn’t unnecessarily expensive, nor is it compulsory.

You can enjoy the features and functionalities of the NotePin wearable technology for free, including unlimited cloud storage, audio and text importing, and multiplatform integration. The only catch is that the free tier is capped at 300 transcription minutes per month and offers only around 9 professional templates for your summaries.

The Pro subscription plan costs $6.60 USD per month, billed annually at $79 USD per year. This gives you access to 1,200 monthly transcription minutes, over 20 professional summary templates, custom templates, and the Ask AI tool. When new PLAUD AI features are released, Pro plan users will be the first to benefit from them.

The PLAUD NotePin subscription plan doesn’t appear essential for casual users or those who only record one or two brief meetings every other day. In a 30-day month, free users will be able to record approximately 10 minutes of content per day. If you only occasionally have long meetings where you put the gadget to use, the free plan sounds feasible. But for anyone who really intends to put the device to full use, a subscription will be necessary.
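
For a rough sense of what those caps mean in practice, the sketch below compares the two tiers using the figures quoted in this article; the average meeting length is an arbitrary assumption for illustration.

```python
# Quick comparison of PLAUD's plans using the figures quoted above.
FREE_MINUTES_PER_MONTH = 300
PRO_MINUTES_PER_MONTH = 1200
PRO_PRICE_PER_YEAR = 79.0
DAYS_PER_MONTH = 30

avg_meeting_minutes = 45  # hypothetical: one typical meeting length

for name, minutes in [("Free", FREE_MINUTES_PER_MONTH),
                      ("Pro", PRO_MINUTES_PER_MONTH)]:
    per_day = minutes / DAYS_PER_MONTH
    meetings = minutes / avg_meeting_minutes
    print(f"{name}: {per_day:.0f} min/day, ~{meetings:.0f} meetings of "
          f"{avg_meeting_minutes} min per month")

# 79 / 12 = ~6.58, matching the advertised ~$6.6/month figure
print(f"Pro effective cost: ${PRO_PRICE_PER_YEAR / 12:.2f}/month")
```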

Image: The different wearable modes of the NotePin AI gadget

Final Thoughts on PLAUD’s Wearable AI

Productivity-based AI wearables offer a much wider and more useful set of applications than companion AI devices like the Friend pendant that we discussed earlier, but all of them evoke an equal amount of concern when it comes to the potential for misuse. 

From smart glasses to these AI devices, many tech makers believe that recording every moment constantly is the technology we all need to improve our lives. While having more concrete records of experiences that our brains fail to hold onto in detail is helpful, there is the fear that we are perhaps trying too hard to capture and record every single detail in high definition despite being less likely to actually go over all of this recorded content.

Still, the NotePin wearable technology is perfect in the sense that it doesn’t overpromise and doesn’t offer to revolutionize our lives in a way that it will ultimately be unable to live up to. The Rabbit R1 and Humane Pin did just that, and both have been found lacking in their application. PLAUD’s NotePin AI could save users a lot of time with its abilities and could be applied in many contexts other than meetings. 

Students might find the ability to record and review classes helpful, and those who work with picky clients might find it much easier to reflect on discussions with them. There is the matter of the other individuals being unable to consent to the recording, but considerate users of the device can always ask the other individual before recording them.

PLAUD opened up pre-orders for the device on August 28, 2024, and it will be available on Amazon starting in November. If you’re thinking about getting your hands on the device, now is the time to place your orders.
