
OpenAI gets lukewarm response to customized AI offering

By Amaka Nwaokocha

The company said the fine-tuning facility and the training data used for fine-tuning are scrutinized via a moderation API and the GPT-4 powered moderation system.

OpenAI has introduced the option of fine-tuning for GPT-3.5 Turbo, enabling artificial intelligence (AI) developers to enhance performance on specific tasks using dedicated data. Developers, however, have greeted the launch with both excitement and criticism.


OpenAI clarified that, through fine-tuning, developers can customize GPT-3.5 Turbo's capabilities to their own requirements. For example, a developer could fine-tune GPT-3.5 Turbo to generate custom code or proficiently summarize legal documents in German, using a data set sourced from the client's business operations.

You can now fine-tune GPT-3.5-Turbo!

Seems like inference is significantly more expensive (8x more) though.

My guess is that anyone with the ability to deploy their own models won’t be swayed by this. https://t.co/p2LbSq4D2H
(Mark Tenenholtz, @marktenenholtz, August 22, 2023)
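
For readers curious what the workflow described above looks like in code, here is a minimal sketch using the OpenAI Python SDK (the 0.x series current at the time of the announcement). The file name, training examples and prompt are hypothetical placeholders, not part of OpenAI's announcement.

```python
import openai  # openai-python 0.x, the SDK series current when fine-tuning launched

# 1. Upload a JSONL file of chat-format training examples (placeholder file name).
#    Each line looks like:
#    {"messages": [{"role": "system", ...}, {"role": "user", ...}, {"role": "assistant", ...}]}
training_file = openai.File.create(
    file=open("german_legal_summaries.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on GPT-3.5 Turbo.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. The fine-tuned model name is only populated once the job succeeds,
#    so retrieve the job later and call the custom model like any other chat model.
job = openai.FineTuningJob.retrieve(job.id)
if job.fine_tuned_model:
    completion = openai.ChatCompletion.create(
        model=job.fine_tuned_model,  # e.g. "ft:gpt-3.5-turbo:my-org::abc123"
        messages=[{"role": "user", "content": "Fasse diesen Vertrag kurz zusammen: ..."}],
    )
    print(completion.choices[0].message.content)
```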


The announcement has drawn a cautious response from developers. X user Joshua Segeren commented that while fine-tuning for GPT-3.5 Turbo is intriguing, it is not a comprehensive fix. In his experience, improving prompts, employing vector databases for semantic search or switching to GPT-4 often yields better results than custom training, and setup and ongoing maintenance costs also have to be factored in.


The base GPT-3.5 Turbo models start at $0.0004 per 1,000 tokens (the fundamental units of text processed by large language models). Fine-tuned versions cost more: $0.012 per 1,000 input tokens and $0.016 per 1,000 output tokens, plus an initial training fee based on the volume of training data.
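
As a rough illustration of what those rates imply, the short Python sketch below estimates the inference cost of a single request to a fine-tuned model. The request sizes are hypothetical and the figures simply reuse the per-token prices quoted above; the one-time training fee is not included.

```python
# Back-of-the-envelope inference cost for a fine-tuned GPT-3.5 Turbo model,
# using the per-1,000-token rates quoted above. Request sizes are hypothetical.

FT_INPUT_RATE = 0.012 / 1000   # USD per input token (fine-tuned model)
FT_OUTPUT_RATE = 0.016 / 1000  # USD per output token (fine-tuned model)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Inference cost in USD for one request (training fee not included)."""
    return input_tokens * FT_INPUT_RATE + output_tokens * FT_OUTPUT_RATE

# Example: a 1,500-token prompt that produces a 500-token reply
print(f"${request_cost(1_500, 500):.4f}")  # -> $0.0260
```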


The feature is significant for enterprises and developers aiming to build personalized user experiences. For instance, an organization can fine-tune the model to match its brand voice, ensuring that its chatbot exhibits a consistent personality and tone that complements the brand identity.


Related: Top UK university partners with AI startup to analyze crypto market


To ensure responsible use of the fine-tuning facility, training data submitted for fine-tuning is screened via OpenAI's Moderation API and a GPT-4-powered moderation system. This is done to preserve the safety attributes of the default model throughout the fine-tuning process.


The system aims to detect and filter out potentially unsafe training data, ensuring that fine-tuned output aligns with OpenAI's established safety standards. It also means that OpenAI retains a degree of oversight over the data users feed into its models.
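
For illustration, a developer could run a similar pre-screen on their own side before uploading a training file, using the publicly available moderation endpoint. The sketch below (OpenAI Python SDK 0.x) is an assumption about how one might do this locally, not a description of OpenAI's internal pipeline, and the file name is a placeholder.

```python
import json
import openai  # openai-python 0.x, the SDK series current at the time

# Client-side pre-screen of a chat-format fine-tuning file (placeholder file name).
# This is separate from OpenAI's own server-side checks; it just surfaces
# obviously problematic examples before the file is uploaded.
with open("german_legal_summaries.jsonl") as f:
    examples = [json.loads(line) for line in f]

for i, example in enumerate(examples):
    text = " ".join(message["content"] for message in example["messages"])
    result = openai.Moderation.create(input=text)
    if result["results"][0]["flagged"]:
        print(f"Example {i} was flagged by the moderation endpoint; review it before uploading.")
```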




Magazine: AI Eye: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4