
GPTs: Custom versions of ChatGPT

atleastoptimal
57 replies
22h10m

I wonder how much money I could make making "GPTs" full time. The barrier to entry is nonexistent, so I imagine that if this becomes something people seriously use, the highest-revenue ones will be advertised externally or have some proprietary info/access.

jatins
20 replies
22h3m

I wonder how much money ...

I saw that announcement and my immediate thought was "God yet another thing passive income youtubers will be shilling soon"

In general, I was a little confused by this. Sam's demo of creating a GPT didn't seem particularly exciting either.

atleastoptimal
14 replies
22h1m

OpenAI in a weird way has mediocre marketing. The examples they use for Dalle-3 are way worse than the average ones I see people cooking up on Twitter/Reddit. They only seem to demo the most vaguely generic implementations of their app. Even their DevDay logo is just 4 lines of text.

RobertDeNiro
10 replies
21h52m

Just the fact that they decided to stay with what is essentially a highly technical acronym i.e. GPT, as their major product line says a lot.

andrewmunsell
3 replies
21h40m

To be fair, the name "ChatGPT" has quite a bit of mindshare and I've found many non-technical folks referring to any generative AI product as "ChatGPT" or "GPT". Yet, if you asked any single one of them what "GPT" stood for, they'd have no clue.

JCharante
1 replies
21h0m

To be fair, I'm a dev who uses chatgpt on an hourly basis and I had no idea what GPT stood for until I googled it just now. I think it's kinda smart to make people strongly associate GPT with OpenAI

toomuchtodo
0 replies
20h47m

"Generative Pre-trained Transformers" for those who don't want to leave the page.

singularity2001
0 replies
20h56m

I've heard all permutations of "GPT" "GPP" "GTP" etc.

falcor84
1 replies
21h41m

I think that's actually crucial in that they want to trademark this otherwise generic term

Aerbil313
0 replies
21h26m

Omg… Thinking about their push for regulation with this… Are they after something like keeping advanced generative pre-trained transformer LLM technology to themselves and prohibiting it to others, at least in the American economy, where regulations can be applied?

shawnc
0 replies
21h11m

I don't recall which interviews it was in, but I believe Sam said in one or two of his world tour stops that they deliberately went with a technical name instead of a human name to help remind those using it that it's not a person. So I think that, coupled with the mindshare it already holds (as others have stated), makes sticking with it a sensible choice.

manojlds
0 replies
21h16m

Better that than do a silly rename ala X.

imdsm
0 replies
21h43m

And yet in a way I find this refreshing

block_dagger
0 replies
21h40m

I think it might be the ubiquity of the term "GPT" as it relates to OpenAI from a public branding perspective.

minimaxir
1 replies
21h25m

Sam's "all our marketing is from word-of-mouth" was refreshingly honest.

toomuchtodo
0 replies
21h21m

Would be somewhat humorous to plug OpenAI into ad platforms, give it budget, and say "go market yourself as effectively as possible."

JackFr
0 replies
21h25m

Need some Boston Dynamics flair.

Terretta
2 replies
3h30m

shilling soon

Study the Poe ecosystem, and look at YouTube or Reddit.

. . .

EDIT: adding the detail from my comment 2 below, I'm referring to...

Bot creator monetization:

https://developer.poe.com/resources/creator-monetization

Earn money when:

- your bot brings a user to Poe for the first time and they eventually subscribe

- your bot brings a user back to Poe and they eventually subscribe

- your bot’s paywall is seen just before a user subscribes

- users send messages to your bot (starting soon)

jatins
1 replies
2h29m

Out of loop on Poe. What's happening there?

Terretta
0 replies
2h1m

Bot creator monetization:

https://developer.poe.com/resources/creator-monetization

Earn money when:

- your bot brings a user to Poe for the first time and they eventually subscribe

- your bot brings a user back to Poe and they eventually subscribe

- your bot’s paywall is seen just before a user subscribes

- users send messages to your bot (starting soon)

conradev
1 replies
20h36m

I would pay money for a GPT that was incredible at naming types in Swift

I’d pay for one that was good at programming rubber ducking

There are specific sub-tasks that everyone would pay for to make their lives easier. This marketplace is trying to make that efficient

NiagaraThistle
0 replies
20h29m

I find NORMAL ChatGPT a good programming Rubber Duck and use it often.

stevesearer
5 replies
22h6m

I bet if you combined GPT creation with Zapier integrations you could help a lot of people and companies.

toomuchtodo
2 replies
22h0m

Thoughts on Zapier trying to become OpenAI faster than OpenAI can become Zapier? There will always be a long tail of APIs that folks want integrated, but the most popular APIs perhaps number only a few hundred (Google Calendar and Slack, for example).

jimmyl02
1 replies
21h53m

I feel like history has shown that those who own the platform end up winning and in this case OpenAI's platform of models seems much harder to recreate. My guess is this would lead to Zapier using OpenAI as a platform and eventually OpenAI would re-create Zapier's integrations before the other way around.

toomuchtodo
0 replies
21h48m

I think this perspective is fair and historically accurate wrt platform risk, but I also strongly believe Zapier has substantial value beyond what historically has been acting as a conduit between APIs. Customers don't want a pipe between their services, they want to automate their mundane work with a robot.

JCharante
1 replies
20h58m

So many Zapier integrations are half-baked. A lot of them are good for reacting to events but not for searching for data (e.g. you can use Zapier to react to a Jira ticket change but can't use Zapier to query Jira for ticket info).

parkerhiggins
0 replies
20h14m

You can, you just have to use JQL. Zapier can create a conditional trigger if the JQL criteria are met and poll on a set cadence, such as every 15 minutes.
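
The polling setup described above boils down to hitting Jira's JQL search endpoint on a schedule. A minimal Python sketch of just the URL construction, assuming a hypothetical `example.atlassian.net` instance (the `/rest/api/2/search` endpoint with `jql` and `maxResults` parameters is the standard Jira Cloud REST API):

```python
from urllib.parse import urlencode

def jira_search_url(base_url, jql, max_results=50):
    """Build the Jira Cloud REST API v2 search URL for a JQL query."""
    query = urlencode({"jql": jql, "maxResults": max_results})
    return f"{base_url}/rest/api/2/search?{query}"

# Hypothetical instance; the JQL selects issues changed in the last poll window.
url = jira_search_url(
    "https://example.atlassian.net",
    "project = OPS AND updated >= -15m",
)
print(url)
```

A scheduler (cron, or Zapier's 15-minute cadence) would fetch this URL and diff the returned issues against the previous poll.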

oezi
5 replies
22h3m

How many people made any money from the plugins for ChatGPT?

minimaxir
2 replies
21h57m

Plugins failing was more due to lack of visibility, which a GPT Store does solve.

singularity2001
0 replies
20h53m

How does the store solve visibility? In the demo it looked like there was a select list of Custom Assistants in the left panel which he manually had to click, so not much different from plug-ins?

Right, he said something about promoting the best ones, but what about discoverability?

manojlds
0 replies
21h10m

Depends on features. How much data can I give it and how much can I customize it?

avarun
1 replies
21h23m

ChatWithPDF made a TON of money, for one.

euazOn
0 replies
20h18m

Source?

bilsbie
5 replies
21h53m

I’m not clear on who the market is. Why would someone buy one?

imdsm
4 replies
21h48m

Imagine a "GPT" that could generate websites and provide you with a live deployment as you change it using natural language. A website builder GPT that is primed to output and design in a decent way, that has all the prep beforehand to use particular libraries, and integrations with something like Render.

People would pay for that.

Sh--... I better build it!

block_dagger
1 replies
21h38m

One can come up with all sorts of ideas like this, but building it will be a matter of slow iterations at prompt engineering in a mixture of natural language and data structures and will be at the whim of changing APIs, including the backing ChatGPT model. Sounds messy, hard to manage, hard to test...or am I missing what the actual process will be for creating one of these?

mirekrusin
0 replies
21h29m

Worry not, surely there will be GPT for creating GPTs.

user_7832
0 replies
21h38m

All fun and games until someone sues because the "hallucinated" product description didn't match the product...

alvah
0 replies
13h44m

ZipWP already does something close to this for WordPress sites.

siva7
3 replies
22h7m

I'm not sure i understand?

atleastoptimal
1 replies
22h5m

The GPTs making the most money will be made by larger companies that advertise their use and maybe make them a funnel to an in-app integration, or will be GPTs made effective by proprietary information.

ca_tech
0 replies
21h30m

This makes me think of the Alexa skills ecosystem, which is full of low-quality skills, many of which have poor data practices abstracted away behind the scenes. How long until "Chat with your favorite character from [Intellectual Property]", made simply to promote a new film or collect data?

A good paper on the state of the Alexa skills ecosystem, BTW:

SkillVet: Automated Traceability Analysis of Amazon Alexa Skills https://ieeexplore.ieee.org/document/9619970

bbor
0 replies
22h5m

Part of the announcement is that they'll open up an app store w/ revenue sharing of some kind. "In the coming months" or smtn

Uehreka
3 replies
21h26m

I wonder how much money I could make making "GPTs" full time

I don’t get why people are thinking along these lines at all. Like, if you don’t own and control the LLM yourself, what makes you so sure OpenAI will allow you to make money at all? They could make advertising externally or hosting external marketplaces against the TOS. They could copy your GPT and put their “official” version at the top of the store page. Just because a technology is powerful does not necessarily mean you can make money off of it.

stale2002
0 replies
20h32m

The answer is because the bot market is a creator economy market.

It takes significant effort to come up with good use cases, build the prompts, and advertise the bots.

So a company can get a lot of value by going after this up and coming type of "content creator".

michaelmior
0 replies
21h17m

what makes you so sure OpenAI will allow you to make money at all?

They might not. But if they do, I'd imagine there are a lot of people who will try. And as long as you're not dependent on the income stream they provide, you don't have much to worry about if it gets shut off.

jes5199
0 replies
20h57m

ChatGPT is the new iPhone. Dealing with a walled-garden app store is never a great experience, but we'll do it because that's where the users are

colesantiago
2 replies
22h3m

I agree.

Anyone with factual data (proprietary or not) is now an input away to AI / GPTs.

Data (or a new foundational model) is now the moat.

j7ake
0 replies
21h53m

Data has always been the moat no ?

altdataseller
0 replies
22h0m

What sort of data do you think will be the most valuable to input to AI?

ilaksh
1 replies
21h12m

Right now, zero. They didn't say when it was rolling out or what percentage they would share. It might be something like 10%. Lol.

atleastoptimal
0 replies
20h53m

I will make a bot that makes GPTs, and a bot that makes other bots that make GPTs

CSMastermind
1 replies
21h42m

I'm more confused how the revenue share works. Do they get part of my ChatGPT subscription fee? Am I paying extra? Per bot? Per amount of time I consult with the bot?

zwily
0 replies
20h45m

Yeah he didn’t explain at all how GPTs would be monetized.

torginus
0 replies
21h33m

And seeing how OpenAI is moving up the value chain, what's the guarantee they won't come up with an in-house competitor to the bot that was built on their platform?

ren_engineer
0 replies
21h25m

not much based on what OpenAI has been doing lately, using their own customers as product research and then copying the best ideas. OpenAI pretty much has to keep a huge lead in model capabilities or developers are going to stop using them for this reason

basically copying the Microsoft strategy of Embrace, Extend, Extinguish. Makes sense they took so much funding from Microsoft

bbor
0 replies
22h5m

I think something could be said for "virality" as well - could easily see some entertainment or lifehack themed templates blowing up on TikTok. No one wants to post the output of the lame, less popular template on their story!

siva7
19 replies
22h8m

So I guess the next wave of startups has been killed. I'm not even sure what kind of startup still makes sense as a GPT thin wrapper.

ethanbond
4 replies
22h5m

None of them, but here's the thing: they never made any sense.

alvah
1 replies
13h39m

Conversion.ai / Jarvis.ai / Jasper.ai / whatever they’re called today raised $125M at a $1.5b valuation, so it made sense at some point to some people.

ethanbond
0 replies
4h15m

1) Jasper isn’t exactly what I’d call “thin.” It has e.g. an entire project management suite inside it now. They moved very fast into a vertical SaaS platform, probably because they knew that a thin wrapper was neither a particularly good product nor a defensible one.

2) There are tons of actually thin wrappers that raised a ton of money — contrary to popular belief this does not mean they are good businesses (Jasper, AFAICT, doesn’t seem to fall into the categories of either “thin wrapper” or bad business)

3) Obviously anything being called out as not making any sense allegedly “made sense” to some people at some point otherwise it wouldn’t be worth talking about.

constantly
0 replies
21h50m

They never made sense long term, of course. But, plenty of first movers made a bundle of money making chatgpt wrappers and marketing the hell out of them. In that context, they probably made sense for a small subset of people for a small slice of time.

RobertDeNiro
0 replies
21h50m

Yeah none of them ever did, and that was always very obvious. If you can make it by wrapping a few API calls, you have no moat and anyone can steal your idea/customers.

colesantiago
3 replies
22h2m

This meme is getting very old and tired.

What kinds have been 'killed'? I don't see this anywhere.

Tankenstein
2 replies
21h52m

Many startups have started over the past few years, trying to build infrastructure (shovels) for companies to integrate LLMs, or specific chat copilots trying to cater to a specific usecase. Most are dead in the water once OpenAI subsumes their feature set.

BoorishBears
1 replies
21h10m

... is what people who don't understand positioning will parrot time and time again.

Jasper isn't having a good time, but you'd think the fact that anyone can now produce, for $20 a month, better output than Jasper got after spending millions of dollars on GPT-3-based pipelines would mean they're dead dead.

But instead they went and changed their positioning, changed who their target market is, adjusted the UX, the messaging, and the feature set, and now it's a product that has a place even if OpenAI can give all of your marketers an internal ChatGPT (unless your plan is to have 100 different "GPTs" for every marketing task in your company)

tl;dr: People fail to realize that OpenAI can offer your startup's core value proposition tomorrow morning, and it doesn't matter if they don't offer it in a format that resonates with your target users.

You could have a cure to cancer and you'd still have to market it correctly.

Sai_
0 replies
20h14m

Isn’t that the crux of the conversation? Jasper had to fight to survive because they were a wrapper.

levmiseri
2 replies
21h42m

Some use cases are still valid I hope. E.g. content generation on a massive scale. An example of that that I'm tinkering with: https://meoweler.com

anigbrowl
1 replies
20h48m

Can I ask what the goal is here? It's cool to be able to spin up a nicely designed website (and it is nicely designed and has a good aesthetic), but isn't the content going to be semantically empty? I don't want to be negative but it feels like cheap plastic imitation of a real thing, and the web is already full of spammy low quality content. Aren't you in danger of your products having a very short life cycle and ending up as digital landfill, so to speak?

levmiseri
0 replies
20h17m

This specific manifestation will likely have the fate of a digital landfill, but I'm generally excited about the prospect of mass content generation in some specific domain use case (as long as it's a field where the quality either doesn't matter all that much or what GPT4 spits out is sufficient).

This particular project is mostly messing around and learning how to use the API, but even here I was surprised about the overall quality of the generated content.

dragonwriter
1 replies
21h49m

I'm not even sure what kind of startup still makes sense as a gpt thin wrapper?

Startups don't make sense as thin wrappers around another company's product when that product is a core offering the vendor is aggressively working to provide as an integrated solution for as many markets as possible, which very much applies to OpenAI's offerings in general, and its chat models especially.

A wrapper that also leverages some exclusive special sauce data, algorithm, etc., for which you have a real moat as a key component, that makes some sense. But just a thin wrapper around GPT? That's just asking to have your market eaten by OpenAI.

It might make some sense where the vendor is a stable, steady-state infrastructure supplier for many markets with no evident interest in entering the startup's market, and where the uncertainty they would create by specifically targeting your startup's market would hurt them more with their established customers than they would gain from your niche. But even that is risky, because it requires lots of potentially erroneous assessments of how the vendor expects their other customers to react to them acting in your market.

api
1 replies
21h49m

Thin wrappers around anything never make much sense. They're trivially replaceable.

audiala
0 replies
21h28m

Even if they create a great UX/UI? GPT would be like the motor of the car, there is still all the structure to build around it.

tacone
0 replies
21h40m

It's just commoditization, hard things now becoming a lot easier and value proposition to move up elsewhere.

It happened with hardware, operating systems, and web tools such as maps.

sergiotapia
0 replies
22h0m

If your startup is just a thin wrapper for GPT, it's DOA regardless. A trivial moat is no moat at all.

You must use LLMs as a launching point to something else, some kind of 10x in a vertical.

Being able to "chat" is table stakes and worthless to you as a company by itself.

Workaccount2
0 replies
20h11m

It was (is) basically a smash and grab. You can spin up a site and a payment processor so fast nowadays that it just takes a weekend of work to go from idea to "Click here to subscribe and get 200 [AI generation] a month!"

mg
17 replies
21h43m

So these "GPTs" are a combination of predefined prompts and custom API access? Not custom-trained LLMs?

If so, I guess you can make such a "GPT" on your own server, independent of a specific LLM service, by using a prompt like

    ...you have available an API "WEATHER_API". If you need the
    weather for a given city for a given day, please say
    WEATHER_API('berlin', '2022-11-24')
    and I will give you a new prompt including the data you
    asked for...
Or is there some magic done by OpenAI which goes beyond this?

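
The loop mg describes, where the model's output is scanned for a tool call and the result is fed back in a fresh prompt, can be sketched in a few lines of Python. Everything here is a stand-in: `fake_llm` stubs out the real LLM service, and `WEATHER_API` is the made-up tool name from the prompt above:

```python
import re

# Hypothetical tool registry; the weather data here is a stub, not a real API.
TOOLS = {
    "WEATHER_API": lambda city, day: f"{city} on {day}: 8C, overcast",
}

# Matches calls of the form TOOL_NAME('arg1', 'arg2') in the model's reply.
TOOL_CALL = re.compile(r"(\w+)\('([^']*)',\s*'([^']*)'\)")

def run_with_tools(ask_llm, user_prompt, max_rounds=5):
    """Re-prompt the LLM with tool results until it stops asking for tools."""
    prompt = user_prompt
    for _ in range(max_rounds):
        reply = ask_llm(prompt)
        m = TOOL_CALL.search(reply)
        if not m or m.group(1) not in TOOLS:
            return reply  # no tool call found: this is the final answer
        result = TOOLS[m.group(1)](m.group(2), m.group(3))
        # Feed the tool output back as a new prompt, as described above.
        prompt = f"{user_prompt}\nTool result: {result}"
    return reply

# Stubbed "LLM": asks for the tool once, then answers from the result.
def fake_llm(prompt):
    if "Tool result:" in prompt:
        return "It will be 8C and overcast in berlin."
    return "WEATHER_API('berlin', '2022-11-24')"

print(run_with_tools(fake_llm, "What's the weather in berlin on 2022-11-24?"))
```

A real version would send `prompt` to an actual LLM endpoint and keep conversation history, but the parse/call/re-prompt cycle is the whole trick.
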
jondwillis
8 replies
19h7m

“GPTs” (terrible name) === Agents

golol
5 replies
11h3m

Not even. An agent should primarily operate in some kind of self-prompting loop. AFAIK you can not specify complex, branching looping behaviors for GPTs.

zer0c00ler
3 replies
9h54m

ChatGPT and plugins were already agents. An agent just does stuff on somebody else’s behalf. Browsing, browses the Internet for the user, code interpreter creates and runs Python code for the user…

What you describe is what's called an autonomous agent, and I agree; I also expected some interesting announcement there, but there were probably too many security issues.

golol
2 replies
8h37m

Well, I would say that when people at the moment say agent they mean what you call autonomous agent. Browsing or code interpreter is just LLM + tool use. I think there is a really large difference in quality between a system that just interacts 1-2 times with some tool/API before giving an answer and one that runs in a loop with undefined length (until it decides to terminate). It's like the difference between programming without loops or recursion vs with them. Night and day.

jondwillis
1 replies
2h15m

Currently the loop results in compounding errors and attention failure. Even with reviews, grounding information, and veracity checks in place by other LLMs it still happens.

golol
0 replies
35m

Yes, agents don't really work yet, I agree.

jondwillis
0 replies
2h13m

You can have a GPT/Assistant/Agent (OpenAI is using all three terms) call out to an API that has a looping agent behind it.

ddmma
0 replies
9h0m

Exactly my point; then you can register a .agency domain based on your area of ‘agent’ expertise.

Sorry, but I won’t bite on the marketplace, as they normally wipe out the top agents with official ones, like Apple did with native apps.

It’s crowdsourced AGI

arcanemachiner
0 replies
13h44m

Agreed, but one is a recognizable trademark (I assume) and one is a generic catch-all term that means a lot of things and would probably be worse from a branding/marketing perspective.

BoorishBears
5 replies
21h40m

If you want to be independent for academic/personal reasons, sure you can.

If you want reasoning capabilities that open source hadn't matched before today, and I'm guessing just got blown out of the water again today... there's no reason to bother.

mg
3 replies
21h38m

You don't need to use an open source LLM for the approach I described. You can still send the prompts to OpenAI's GPT-4 or any other LLM which is available as a service.

BoorishBears
2 replies
21h18m

What other LLM will compete with GPT-4 Turbo (+ V)? At most you're hedging that Anthropic releases a "Claude 2 Turbo (+ V)": is complicating your setup to such a ridiculous degree vs "zero effort" worth it for that?

If things change down the line the fact you invested 5 minutes into writing a prompt isn't going to be such a huge loss anyways, absolutely no reason to roll your own on this.

dragonwriter
1 replies
21h4m

If things change down the line the fact you invested 5 minutes into writing a prompt isn't going to be such a huge loss anyways

If things change down the road such that your tool (or a major potential downstream market for your tool) is outside of OpenAI's usage policies, the fact that you invested even a few developer-weeks into combining the existing open source tooling to support running your workload either against OpenAI's models or a toolchain consisting of one or more open source models (including a multimodal toolchain tied into image generation models, if that's your thing) with RAG, etc., is going to be a win.

If it doesn't, maybe its a wasted-effort loss, but there's lots of other ways it could be a win, too.

BoorishBears
0 replies
20h14m

Let's go back to my very first comment.

If you want to be independent for academic/personal reasons, sure you can.

If your goal is to be in business, or to get some sort of reach, or anything other than "have fun tinkering"... wasting developer weeks on "could be a win" is how to fail.

dragonwriter
0 replies
21h8m

If you want to be independent for academic/personal reasons,

Or OpenAI's usage policy limits reasons (either because of your direct use, or because of the potential scope of use you want to support for downstream users of your GPT, etc.) Yes, OpenAI's model is the most powerful around. Yes, it would be foolish not to take advantage of it, if you can, in your custom tools that depend on some LLM. Depending on your use, it may not make sense to be fully and exclusively dependent on OpenAI, though.

jatins
0 replies
21h33m

Yeah I don't _think_ there is a lot "magic" in custom GPT.

However, creating something like this previously required a Jupyter notebook; now it's just... asking for it. That makes it accessible to 10x more people.

burningion
0 replies
21h36m

That's the thing, we don't know.

The lack of transparency for how the product works behind the scenes will most likely make it difficult to build something effectively.

alvis
12 replies
22h34m

I have been utilising GPT to create my own app, and now OpenAI wants to be the only app that matters. I’m not sure whether I should be excited or not :/

fatherzine
5 replies
22h8m

Every prompt you put into somebody else's LLM goes into the training set of the next iteration of said LLM, with the explicit purpose of replacing you as a cognitively, and therefore economically, relevant entity. The only dignified move is not to play, though it's a very difficult choice. It's probably not a winning move either, but at this point there are no obvious winning moves: you and I and all our loved ones will be obsoleted and replaced by tech within the next few years. Concretely, not playing means no longer feeding the machine data, i.e. disconnecting from the digital world. Given how digitalized society is becoming, possibly from modern society altogether. Godspeed.

willsmith72
2 replies
21h49m

you and I and all our loved ones will be obsoleted and replaced by tech within the next few years

What is this? When has this ever proved true, despite being spouted throughout all history? It's such an easy, throwaway and meaningless sentence.

Please explain further. Replaced how? By what? What's going to happen to the humans in the next few years?

Aerbil313
1 replies
21h16m

What is your job? Chances are, it's nothing an AGI (based on LLMs) can't do, and an AGI is possible today. People are building these things today; check out GitHub. And if you don't believe GPT-4 can do your job cheaper than you, just wait for GPT-N, which will be able to.

willsmith72
0 replies
20h35m

Progress isn't linear. Point me to an AGI on GitHub. Our definition of work changes based on the greatest possible technology at any given moment.

oezi
0 replies
21h58m

I think all technological revolutions have caused similar transformations which obsolete certain types of activities and push novel activities to the forefront.

Not playing is certainly possible but could be a losing strategy as well.

bbor
0 replies
21h57m

I like to think about it from the perspective of the far future, looking back on me as a historical actor. I have no idea what will happen exactly, of course, but I can't imagine a moral/social crisis of the past where "cross your fingers and hope it goes away" is a move I'd approve of...

That said, your worry is one I definitely share. I guess I just hope more people think of ways they can try to ride/shape this wave, rather than stop/weather it.

topicseed
1 replies
22h21m

They will allow revenue sharing, so it's a matter of how many customers they can offer you for your app, and whether it makes sense for you to distribute through them or bypass them and distribute yourself.

elforce002
0 replies
22h2m

This business model will only serve them in the long run. Luckily for us, open source llms are getting traction and we won't depend on "open"AI to implement features on our apps.

zeroCalories
0 replies
22h6m

When your startup is a repackaging of another company's tech I don't see how you can be surprised when a big player swoops in to kill you off.

mickdarling
0 replies
22h14m

Well, they tried to put a government sponsored moat in the way of other people building AI companies that would be competing with them. Thankfully, they mostly seem to have whiffed the ball on that one. This plan of theirs to monetize the creation of agents and other tools that take advantage of their underlying infrastructure is a good secondary kind of moat. Because, if your tool relies on their underlying infrastructure, even if you could build something different, the infrastructure is required. This may be a "less-evil" way to keep them building things and making tools available without completely locking out competition.

cornholio
0 replies
22h3m

What did you expect? Surely it was just an MVP, and you expected OpenAI to commoditize its complements? Right?

In the long run, if your idea or app can be expressed as a flavor of a general GPT, you will not be able to compete with the AI gorillas. The space for AI startups is in custom, highly niched data or capabilities that cannot be found in a general corpus, or that you can uniquely generate or control.

ahmedhosssam
0 replies
21h48m

I thought about the same thing. I've seen a lot of apps with similar ideas, like "a ChatGPT chatbot for your data or your website"; I don't know how they will deal with this.

teabee89
9 replies
22h3m

"Example GPTs are available today for ChatGPT Plus and Enterprise users to try out including Canva and Zapier AI Actions." and yet as a paying ChatGPT Plus customer, neither the Canva nor the Zapier AI Actions link work for me, I get a "GPT inaccessible or not found" error for Canva or Zapier.

nomel
7 replies
21h4m

OpenAI is a sane company. They do rollouts for new features.

joshstrange
6 replies
20h19m

No, they just lie in their marketing

Example GPTs are available today for ChatGPT Plus

or

Starting today, no more hopping between models; everything you need is in one place.

Neither of which are true. I'm a paying user and I have access to neither. They do this _all the time_. They announce something "available immediately" and it trickles out a week or more later. If they want to do gradual rollouts (which is smart) then they should say as much.

omarfarooq
5 replies
17h59m

Are you sure? Both of those went live at 1 PM PST.

zer0c00ler
0 replies
9h49m

No, it’s 11:24 pm pst and not available for me.

raesene9
0 replies
8h54m

Not live for me in the UK 8am GMT... Ideally they'd be a bit more transparent about their rollout process/timeline

namibj
0 replies
17h11m

I (Plus subscriber, EU) tried https://chat.openai.com/gpts/editor (as linked from https://help.openai.com/en/articles/8554407-gpts-faq#h_86549... ) a minute ago and got a "You do not currently have access to this feature" toast notification on an orange background at the top of the page, along with "chat.openai.com" in the URL bar. (Chrome on Android, but besides the responsive layout, I haven't noticed discrepancies with the desktop Chromium site/interface; sadly the native Android app still shows no sign of code interpreter mode.)

The announced (and a few days ago leaked) Omni prompt also doesn't show up in the model selector. And that despite the expanded context looking very promising for the REPL-feedback-augmented code generation abilities.

astrange
0 replies
15h39m

I don't have them either. I also get things on mobile and web days apart IIRC.

KerryJones
0 replies
17h16m

I am sure, this happened to me

vinni2
0 replies
21h25m

I have the same issue, my guess is they are still rolling out.

Edit: here is the message I get:

Your access to custom GPTs isn’t ready yet. We’re rolling this feature out over the coming days. Check back soon.

nojvek
8 replies
21h18m

Googles: Custom versions of Google.

Anyone can easily build their own Google. No coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data.

The whole point of ChatGPT was to go to one single place for all your knowledge needs.

The whole point of Amazon is the largest collection of things you can buy and have delivered to your doorstep in a few days.

I don't want many GPTs. I want one GPT that can reliably digest all information available on the internet, understand it, organize it and allow me to do useful things with it.

It's the same enshittification that Meta is doing on WhatsApp with celebrity AIs like the Snoop Dogg one, which are gimmicks.

Please don't build gimmicky features. Leave that to the community via integrations.

vunderba
1 replies
21h15m

They still have that - it's just regular GPT-4. One immediate application of this one is that it makes it trivial to create a fine-tuned version of GPT based on your data, where you can upload a series of documents that basically act like a set of embeddings that augment the regular GPT training data.

nomel
0 replies
20h52m

create a fine tuned version of GPT

No. It's unclear whether fine-tuning is happening at all. Many are guessing it's RAG.

gustavopch
1 replies
21h6m

Being able to have multiple personae could be very useful.

One persona may not give you the answer you're looking for, but another one may. I think maybe they should require the GPTs to have human names though so people intuitively understand that.

Like, Paul can't help me with this task, so let me ask Monica instead.

They could even interact.

nomel
0 replies
20h53m

A good example is creative/idea work vs fact work. You don't want creative facts, and you don't want fact bounded creativity. You either have a prompt ready to paste, to prime the conversation/context, or you can use a personality.

One "über" AI is great, but it requires guidance into the context you're interested in, including yourself. For example, the default ChatGPT will assume you're uneducated about any topic you ask about.

I think this all fits perfectly into what Sam Altman talked about in the Lex Friedman podcast: people want an AI that fits their own worldview/context. Custom instructions, and "about yourself" are good starts, but sometimes you want to talk to a chef, and sometimes a scientist.

gfodor
1 replies
21h1m

The whole point of ChatGPT was to go to one single place for all your knowledge needs.

That's just your own perception. OpenAI is trying to build AGI. You entered into the storyline at a specific junction and jumped to conclusions based on the limits of your own imagination, or something.

nojvek
0 replies
20h19m

Right. All I'm saying is I want the God-level AGI, the king of kings of AGI.

Allowing me to customize ChatGPT is the same itty bitty intelligence, like putting a new mask on ChatGPT.

OpenAI changed their values to 'AGI Focus'. This seems like OpenAI losing focus.

jes5199
0 replies
21h16m

for a while, adding a custom google search to your website was considered a great feature

joshstrange
8 replies
20h22m

We’ve made ChatGPT Plus fresher and simpler to use

Finally, ChatGPT Plus now includes fresh information up to April 2023. We’ve also heard your feedback about how the model picker is a pain. Starting today, no more hopping between models; everything you need is in one place. You can access DALL·E, browsing, and data analysis all without switching. You can also attach files to let ChatGPT search PDFs and other document types. Find us at chatgpt.com.

It's so annoying how they say this, I refresh and I still have to hop between models. Just say "rolling out over the next week" if that's what's happening. I even logged out and back in and still the same old way of doing it.

column
3 replies
7h27m

What is even more annoying is every plus subscriber whining that they didn't get the feature yet. We know, you paid and you want to play now, but it's a rollout and it's progressively coming towards you. Just have a little bit of patience.

joshstrange
2 replies
5h31m

You’re completely missing the point. If they said “rolling out over the next week” it’d be one thing but they almost always say “available now” and it’s not. That’s not ok, that’s what we call a lie.

spencerchubb
0 replies
1h35m

Nobody is going to say "We are rolling it out to a random set of users, and then if there are no crashes, we will roll out to more" in a marketing demo.

Were you or anyone else harmed by the lie?

column
0 replies
2h1m

More whining...

lukejagg
1 replies
20h3m

The roll outs are quite slow. Sometimes it takes weeks for them to release a feature to my account while I know others who get access immediately. I understand that it helps them control the quality of ChatGPT, but I wish I got access earlier as I have been subscribed since the beginning.

chankstein38
0 replies
19h34m

Sometimes it seems like logging off and back on causes these updates to hit. I didn't have Dall-e or image capabilities after weeks of it being released and I logged off and back on and both were available. This was between multiple computers logged in so it wasn't just a cache clearing situation.

ryanSrich
0 replies
18h48m

The lack of transparency around how they release access to features is infuriating. Especially when you pay for the product. I cannot stand it.

chankstein38
0 replies
19h35m

For what it's worth, for me it appeared nothing was different except I could leave the Default model on, upload a picture, ask it to generate a similar one with a change, and it just booted up Dall-e and did so. I'm not sure if everyone's experience is like this. I agree it's super annoying how they handle these things. I do still have the dropdown but, set to Default, it would generate images for me.

I do recommend logging off and logging back in if it isn't working. I've seen that update things for me.

tracerbulletx
7 replies
21h38m

Can anyone explain what "extra knowledge" means specifically? Is that like fine tuning? How much data can I give it to learn? How much can it retain? Can it be updated over time?

minimaxir
2 replies
21h35m

The keynote used a .txt file of a lecture that the user uploaded as a data source the model can select from. From a technical perspective, it's another data source for retrieval-augmented generation (RAG) doing a vector search in the background: https://platform.openai.com/docs/assistants/tools/knowledge-...
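
For the curious, that retrieval flow can be sketched in a few lines. This is a toy illustration, not OpenAI's implementation: the bag-of-words "embedding" stands in for a real embedding model, and all names and chunks are made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a term-frequency vector over lowercased tokens.
    # A real system would call an embedding model here instead.
    return Counter(text.lower().replace("?", " ").replace(".", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks: list[str], query: str) -> str:
    # Vector search: return the uploaded chunk closest to the query.
    q = embed(query)
    return max(chunks, key=lambda c: cosine(embed(c), q))

# Chunks from an uploaded file (e.g. the lecture .txt in the keynote).
chunks = [
    "The lecture covered gradient descent and learning rates.",
    "Office hours are Tuesdays at 3pm in room 204.",
]
question = "When are office hours?"
context = retrieve(chunks, question)
# The retrieved chunk is stuffed into the prompt before the model sees it.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The point is that no weights change: the "extra knowledge" is fetched per query and prepended to the model's context.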

tracerbulletx
0 replies
21h34m

Ah ok, thank you.

singularity2001
0 replies
20h48m

"optimizes for quality by adding all relevant content to the context of model calls." So for their own profits they maximize recall and let GPT handle precision.

Racing0461
1 replies
21h34m

I doubt it's fine-tuning (actually changing the model weights). It's more of an "I'm going to paste a text blob, then in the following chats I will ask questions about it" type of inner prompt.

singularity2001
0 replies
20h47m

For longer documents it uses vector embeddings

vunderba
0 replies
20h21m

The demo just showed a file dialog box where they could upload a set of static files. What I'd really like to see is the ability to sync a source integration (for example into a GitHub repo or a notion account), and it would always pull relevant information using a RAG architecture.

ankit219
0 replies
20h35m

They showed a demo where you could upload a file while creating an agent. As others have answered. I think it's about configuring an agent in a way that you give it material on some specific topic (one file, multiple files) and it uses Retrieval to augment the answer based on the source material.

Google launched Notebook LM[1] a while back which does a similar thing conceptually. It allows you to import a Google drive folder with docs of the stuff you would want to understand and then just chat with it. It's a good product but restrictive in the sense that it only allowed Google docs.

[1] https://blog.google/technology/ai/notebooklm-google-ai/

bbor
7 replies
22h8m

I think I speak for a few of us AI Doomers here when I say that this makes me excited and terribly anxious at the same time. So... well done OpenAI :). Great news, and a great feature!

I have no doubt that this will immensely increase uptake among the less technically literate, since it will allow the techy people in their life (or on the app store) to introduce them to it with much less friction. It'll be like the little examples you can find on the "New Chat" screen of every chatbot, but 1000x more engaging.

cubefox
6 replies
22h4m

You clearly aren't much of a doomer yet. There is nothing exciting about a looming catastrophe.

MeImCounting
4 replies
21h49m

If the catastrophe is mega-corps getting to monopolize a valuable technology because people saw The Terminator and thought it was a documentary then any announcement from OpenAI is bad news.

cubefox
3 replies
21h14m

The catastrophe is humanity going extinct from superintelligent AI. Like a native species going extinct after an invasive species arrives. Mentioning Terminator is like saying the Earth is flat because Hitler said it is round.

MeImCounting
2 replies
20h57m

This reminds me of the Eliezer Yudkowsky tweet saying that AI was going to hack our DNA and use our bodies to mine bitcoin or something. Ridiculous fearmongering.

I have probably read more sci-fi than the average HN user but the whole "superintelligent AI is going to kill us all" hysteria is among the more ridiculous ideas I have ever heard.

Really though, I have entertained all the doomers' propositions and none of them seem any more likely than the plot of The Matrix. The ideas that prop these fears up are based on layers of ever more far-fetched hypotheses about things that do not exist. If you have a novel reason why AI poses an x-risk, I am more than interested in hearing it.

Here is a really interesting quote that I think might go against some of the misanthropic tendencies of doomers and the tech crowd in general but it really is more relevant than ever: “There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”

cubefox
1 replies
20h30m

A species doesn't automatically get more altruistic towards other species once it gets smarter. Look at how many species humanity drove to extinction.

MeImCounting
0 replies
20h21m

True, humans have been remarkably ignorant throughout our short history. You might notice, though, that most folks don't go around abusing animals or hurting other people on purpose. Take from that what you will.

Give this essay a read if you're interested in good-faith arguments about the danger of AI at the current state of development: https://1a3orn.com/sub/essays-propaganda-or-science.html

Maybe together as a species we can avoid hellish cyberpunk dystopias brought on by regulatory capture of the most powerful technology created by humans thus far. I can only hope.

ilaksh
5 replies
21h50m

https://chat.openai.com/gpts/editor "You do not currently have access to this feature"

Roritharr
4 replies
21h49m

Probably a staged rollout, once again leaving people outside of the US to wait.

judge2020
0 replies
21h45m

I'm in the US and still don't have it.

ilaksh
0 replies
21h46m

I mean, I am in the US and have been waiting for the last rollout still.

chabad360
0 replies
21h42m

Nah, I'm in the US (with a US based account) and I'm still getting the message it's rolling out over the next few days (you have to open a sample to see that message).

ccakes
0 replies
21h44m

Ahh.. is that why I’ve been on the waitlist since day 1 for ChatGPT Enterprise?

C’mon OpenAI, we’re /trying/ to give you money here!

empath-nirvana
4 replies
22h6m

Is this basically them deploying fine tuned models? It wouldn't be very interesting to just be using custom prompts.

dragonwriter
1 replies
21h59m

Is this basically them deploying fine tuned models?

From the description, and from the past outside practice it is marketed as moving into OpenAI's offering, it sounds more like it's custom prompts, not fine-tuned models.

bearjaws
0 replies
21h24m

More specifically, it seems like a RAG (retrieval-augmented generation) system rather than fine-tuning.

bbor
0 replies
22h3m

I don't know for sure but I'd bet BIG money that these do not include automatic fine-tuning, though I still understand them to be a bit more powerful than just "custom prompts" -- think templates, or sets of custom prompts for specific (sub-)situations.

This is the kind of feature that will prove to be a minor improvement for anyone on this forum, and a complete paradigm shift for the less technically-inclined. IMO.

Method-X
0 replies
20h8m

I think OpenAI will take the most popular "GPTs" in their store and launch their own fine tunes.

trash_cat
2 replies
18h32m

Their Zapier demo was uninspiring. They are trying to appeal to the mainstream (non-dev) users when their actual value is the API. Everyday users don't have access to the proprietary data that makes LLM bots valuable.

This makes me think that open-source models are the future, for other reasons as well. No company is going to give OpenAI their data to make bots, regardless of whether they promise not to train on the data. So what OpenAI can do at the moment is focus on everyday users and see where the biggest use cases are.

Furthermore, as Sam alluded to, this confirms to me that multi-LLM architectures (AutoGen, AutoGPT, BabyAGI, etc.) are the future. It's about pushing the ways in which we can use these models, i.e. using different architectures, because it will be a while before we get more powerful models. There is simply no compute for it at the moment. The dust has yet to settle.

x0x0
0 replies
12h31m

I disagree. There are a lot of folks using Zapier that would like some light automation (I have fielded requests from several) who are not engineers and cannot build anything with an API. The bit where they built out the functionality with Zapier and human instructions was aimed squarely at folks like this.

atleastoptimal
0 replies
18h17m

The issue is all open source models are made by huge companies with an indirect profit motive. There is too much scale advantage for open source models to be competitive in any field other than those which venture backed companies deliberately avoid (porn, illegal stuff, etc).

jes5199
2 replies
21h17m

they mentioned revenue sharing in the keynote, and I'm eager to find out how that is going to work. There isn't much money in the $20/month subscription to go around to very many other developers

ilaksh
1 replies
21h10m

What I was thinking for my own agent hub thing was to sell universal credits and charge per use or token. Then agent developers could specify what they want to charge.

jes5199
0 replies
20h22m

that's kinda interesting but I'm not sure it maps well to the value added by a GPT app. Like, I'm imagining that I'll do old-fashioned API work and GPT will be the UI layer - sure, the tokens are the most expensive part, but the value for the customer comes from easy access to whatever is on the backend

garbanz0
2 replies
17h59m

When I saw this, I figured it was a clear step back from simply using plugins in the main ChatGPT view. It's basically plugins, but with extra prompting and you can only use one at a time.

But if you look at projects like Autogen ( https://github.com/microsoft/autogen ), you see one master agent coordinating other agents which have a narrower scope. But you have to create the prompts for the agents yourself.

This GPTs setup will crowd-source a ton of data on creating agents which serve a single task. Then they can train a model that's very good at creating other agents. Altman nods to this immediately after the GPTs feature is shown, repeating that OpenAI does not train on API usage.

Prediction: next year's dev day, which Altman hints will make today's conference look "quaint" by comparison, will basically be an app built around the autogen concept, but which you can spin up using very simple prompts to complete very complex tasks. Probably on top of a mixture of GPT 4 & 5.

LouisvilleGeek
1 replies
17h42m

Curious if anyone that has access can verify that we can still build "actions" via localhost like we previously did with plugins?

mvkel
0 replies
9h26m

Confirmed. Actions don't look to be deprecated

awfulneutral
2 replies
22h12m

The icon for the game explainer one is a die with two 5s on it. I wonder if they use ChatGPT to write their blog articles as well...

timdiggerm
0 replies
22h10m

Definitely a very funny example of their own product's shortcomings

1970-01-01
0 replies
22h1m

If it was any good, the "Negotiation" GPT would quickly get you paying extra for its services.

alphanullmeric
2 replies
19h42m

So what is the selling point for writers and artists now? Being more expensive, much slower and less capable than a machine? At the rate these things are progressing, there’s not going to be any point in keeping them around.

Conscat
1 replies
17h19m

- Guy who doesn't read books or browse artwork.

alphanullmeric
0 replies
17h5m

How much something is valued is determined by how much others are willing to give up for it, and looking at artist salaries it’s already not much. Whether or not I browse artwork would make little difference. Trying to attack me for pointing out that something else does the job better doesn’t improve your value proposition at all.

truakon89
1 replies
22h8m

But where is the option/link to create a GPT? I can't find it

imdsm
0 replies
21h47m

1 pm PST — path /create

rco8786
1 replies
21h28m

Does anyone know if this is just a "native" RAG implementation? Or if it's actually fine tuned models?

minimaxir
0 replies
21h22m

Native RAG, with likely some secret sauce to align the models a bit better: https://platform.openai.com/docs/assistants/tools/knowledge-...

Retrieval augments the Assistant with knowledge from outside its model, such as proprietary product information or documents provided by your users. Once a file is uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index and store the embeddings, and implement vector search to retrieve relevant content to answer user queries.

mvkel
1 replies
18h5m

This feels like simply a system prompt generator. It gives you a survey to generate a decent system prompt from, then staples that prompt to the front of any interactions with the GPT.

It doesn't seem to be much smarter than a simple "tell me how I should interpret inputs from the user."
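
On that theory, the mechanics would look something like this sketch (the persona text and all names are made up for illustration; this is not OpenAI's actual code):

```python
# A stored "GPT" is, on this reading, little more than a system prompt
# prepended to every request sent to the underlying model.
SYSTEM_PROMPT = (
    "You are Sticker Whiz, an assistant for designing die-cut stickers. "
    "Always ask for size and quantity before quoting a price."
)

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    # Staple the system prompt to the front, then replay the chat
    # history and append the new user turn.
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_input}]
    )

msgs = build_messages([], "I want a sticker of my dog.")
# msgs is what would go out as the `messages` payload of a chat request.
```

Every conversation with the custom GPT then starts from the same persona, without any change to the model itself.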

LouisvilleGeek
0 replies
14h33m

One of the things I did not care for is setting my plugin hints from the Custom Instructions setting. It seems this will allow for each "GPT" agent to have their own system instructions which will be quite handy.

littlestymaar
1 replies
21h50m

The best GPTs will be invented by the community

We believe the most incredible GPTs will come from builders in the community. Whether you’re an educator, coach, or just someone who loves to build helpful tools, you don’t need to know coding to make one and share your expertise.

“Please work for us for free, while we keep all the product of your work for ourselves like we did with the content we scraped on the internet.”

OpenAI really is next level parasitism.

leobg
0 replies
20h47m

Isn’t that the game they all play? Amazon Marketplace. Apple App Store. Let the guinea pigs run. See which one gets the furthest. Then take away its lunch.

heavyshark
1 replies
21h49m

Any word on when the GPTs will be available?

imdsm
0 replies
21h47m

1 pm PST

gzer0
1 replies
21h57m

Posting from another comment in a different thread, everything that is new from OpenAI developer day:

  - Context length extended to 128k (~300 pages).
  - Better memory retrieval across a longer span of time
  - 4 new APIs: DALLE-3, GPT-4-vision, TTS (speech synthesis), and Whisper V3 (speech recognition).
  - GPT-4 Turbo, a more intelligent iteration, confirmed as superior to GPT-4.
  - GPT-4 Turbo pricing significantly reduced, about 3 times less expensive than GPT-4. Input and output tokens are respectively 3× and 2× less expensive than GPT-4. It’s available now to all developers in preview.
  - Improved JSON handling (via JSON mode) and function invocation for more sophisticated control.
  - Doubled rate limits with the option to request increases in account settings.
  - Built-in retrieval-augmented generation (RAG) and knowledge current as of April 2023.
  - Whisper V3 to be open-sourced and added to the API suite.
  - Copyright Shield initiative to cover legal fees for copyright-related issues.
  - Ability to create your own, custom "GPTs".
  - Assistants API and new tools (Retrieval, Code Interpreter)
  - 3.5 Turbo 16k now cheaper than old 4k. 0.003c per 1k in / 0.004c per 1k out.
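
As a back-of-the-envelope check on the pricing claim, using the per-1K-token prices announced at DevDay (GPT-4 at $0.03 in / $0.06 out, GPT-4 Turbo at $0.01 in / $0.03 out; the workload below is just an example):

```python
def cost(tokens_in: int, tokens_out: int, price_in: float, price_out: float) -> float:
    # Prices are dollars per 1,000 tokens.
    return tokens_in / 1000 * price_in + tokens_out / 1000 * price_out

tokens_in, tokens_out = 100_000, 20_000  # example workload
gpt4 = cost(tokens_in, tokens_out, 0.03, 0.06)   # $4.20
turbo = cost(tokens_in, tokens_out, 0.01, 0.03)  # $1.60
# Input is 3x cheaper and output 2x cheaper, so a real workload lands
# somewhere between 2x and 3x cheaper overall depending on the mix.
```
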
glitchc
1 replies
21h49m

The cynical me thinks the point of calling these GPTs is a ploy to trademark the term "GPT".

leobg
0 replies
20h54m

Thought the same thing.

dongobread
1 replies
21h8m

I don't think the target market for this is people looking for extremely knowledgeable LLMs that can handle deep technical tasks, given that you can't even finetune these models.

I'd guess this is more of an attempt to poach the market of companies like character.ai. The market for models with a distinct character/personality is absolutely massive right now (see: app store rankings) and users are willing to spend insane amounts of money on it (in part because of the "digital girlfriend" appeal).

crooked-v
0 replies
20h24m

digital girlfriend

The ban on "adult themes" is part of the reason people use services other than OpenAI for that kind of thing in the first place.

Vfiorx
1 replies
21h21m

I don’t understand why more people aren’t creating digital doubles of their brains. You train your own LLM practically for free and have your own digital double to maximize any and all productivity. Why is there not more of this?

rocmcd
0 replies
20h11m

Try it and let us know how it works out?

I think you'll find that "maximizing any and all productivity" is both harder than you think and not really something worth striving for.

Plus, as neat as these advancements are, they are nowhere near able to create a "digital double of your brain." There is a vast ocean between where we are today and the possibility of replicating a brain digitally (if it's even physically possible).

ChrisArchitect
1 replies
21h42m

[dupe]

More discussion over here: https://news.ycombinator.com/item?id=38166420

minimaxir
0 replies
21h32m

HN generally allows multiple related announcements for keynotes such as this.

thih9
0 replies
20h21m

Is this going to be openai’s moat?

E.g. one popular comment in the submission about twitter’s new “edgy ai” was that it could be reimplemented as a chatgpt prompt[1]. Looks like this is even more relevant now.

[1]: https://news.ycombinator.com/item?id=38148845

thereisnoself
0 replies
18h40m

What's the file upload limit? I can see many junior roles being replaced if a large corpus of data can be uploaded.

smcleod
0 replies
6h21m

More non-standard, closed source, vendor lock in.

singularity2001
0 replies
21h2m

"you can now create custom versions of ChatGPT"

how? login opens unrelated tab.

Found it:

https://chat.openai.com/gpts/discovery -> create own ->

https://chat.openai.com/gpts/editor

sidcool
0 replies
13h42m

I didn't understand this well enough to appreciate it. What are the use cases, and what about privacy? How is it different from regular GPT-4?

runjake
0 replies
21h54m

https://archive.ph/fEp7m

For others like me that are getting errors accessing the page.

rickcarlino
0 replies
21h48m

I’m very excited about all the new developments but must say that they really dropped the ball on marketing this one. The feature name is ambiguous and unsearchable.

rajnathani
0 replies
8h21m

This is just slid into the last paragraph, but it is a pretty big deal as this knowledge cutoff date is now past ChatGPT’s initial launch date (thus OpenAI has likely sort of figured out as to how to exclude crawled text data generated by itself and other LLMs on the internet):

Finally, ChatGPT Plus now includes fresh information up to April 2023.

partiallypro
0 replies
13h25m

I'd love to have a GPT for language learning, granted this would also require vocal inputs and audio.

ofermend
0 replies
21h34m

Excited about GPT4-Turbo and longer sequence lengths. Looking forward very much for faster inference. We just released Vectara's "Hallucination Evaluation Model" (aka HEM) today https://huggingface.co/vectara/hallucination_evaluation_mode..., with a leaderboard: https://github.com/vectara/hallucination-leaderboard GPT-4 was already in the lead.

Looking forward to seeing GPT4-Turbo there soon.

minimaxir
0 replies
22h6m

The GPT Store will prove to be an interesting moderation and quality control experiment for OpenAI. Apple/Google have spent a lot of time and money on both of those things and they still have issues, and that's not even accounting for the fact that AI growth hackers will be the primary creators of GPTs. And a revenue sharing agreement will provide even more incentive to do the traditional App Store marketing shenanigans.

m3kw9
0 replies
20h40m

So basically selling prompts but openai keeps the prompt a secret. Is that it?

itissid
0 replies
20h2m

Interesting. They are taking the pain out of two things

1. RAG-based infra building for people (regular Joes).

2. Compartmentalizing or working around context lengths by introducing threads.

ilaksh
0 replies
21h55m

I can't access the GPTs stuff. I haven't actually got the last update either with the combined models or anything.

hospitalJail
0 replies
21h26m

This is sooo nice. What is everyone using for theirs? Here is mine

"Be brief, give 10 answers, give probabilities that each answer is the best/most correct"

golol
0 replies
33m

I think "GPTs" is peak bitter lesson

fudged71
0 replies
22h12m

Poe has done a great job in this space, with quite a large marketplace of existing bots. I'm excited to see what it can do with the extra vision, DALL·E, and Code Interpreter models.

emadabdulrahim
0 replies
20h10m

When is the updated UI and feature set for ChatGPT rolling out?

duxup
0 replies
22h0m

Is this just the role and content type data being set as you might with their dev tools?

dang
0 replies
21h38m

Related ongoing threads:

New models and developer products - https://news.ycombinator.com/item?id=38166420

OpenAI DevDay, Opening Keynote Livestream [video] - https://news.ycombinator.com/item?id=38165090

colesantiago
0 replies
22h10m

Awesome, I've been waiting for something like this.

It looks like we are moving away from apps to web GPTs; the chatbot interface is here to stay, and 'AI' is now the default interface that is to be expected.

I also don't need to spend lots of money on a developer to test my ideas out, this is great for product validation, I look forward to playing with GPTs.

I can see writing, travel and other bots being more enhanced and more powerful, hopefully existing startups will adapt to this change.

Exciting and interesting times!

caglaroktay
0 replies
18h11m

I don't get where ChatGPT plugins fit now. Are they going to allow plugins to be enabled in the new GPTs?

cagataycali
0 replies
14h21m

Basically, OpenAI invented TinyAI.id (4 months later).

cafxx
0 replies
21h43m

Would be nice also if they fixed the ubiquitous "network errors" that happen approximately every single time...

breadsniffer
0 replies
17h55m

GPTs replacing AI startups is what happens when they are pushed to ship as fast as possible by VCs and YC. What do they end up shipping? Very small +1 features that are already on OpenAI's roadmap.

bparsons
0 replies
21h47m

Hasn't this been around for a while?

bluecrab
0 replies
21h19m

Startup funeral.

bertil
0 replies
17h26m

What is your impression of how respectful of their ecosystem this was? I saw many “OpenAI killed my start-up”-type reactions. Still, it feels like they are trying to handle obvious use cases in the ecosystem and make things easier to integrate canonically and more transparently. And to handle legal concerns. It was their first ecosystem conference.

Then again, iOS felt like a very promising ecosystem when the iPhone 4 came out; I feel like it turned into an addendum/duplicate of the web rather than something genuinely original. Android seems more vivid, but not much more.

b3nji
0 replies
19h54m

Forgive my ignorance, but isn't this just various prompts that OpenAI allows you to store and quickly reuse?

anonymouse008
0 replies
20h50m

Is this scary? They said revenue share - it sounds like a streaming platform software licensing model. That sounds like getting paid much less than 70%.

anonu
0 replies
20h36m

A couple dozen startups just died.

ahmedfromtunis
0 replies
22h11m

I am excited for any new tech that has the potential to advance the human species. And this is one of them.

The challenge is how to adapt to create new opportunities.

adkandari
0 replies
9h21m

Well, the market is pretty obvious. Example: a fitness influencer on Instagram can become your personal coach, i.e. an AI clone of the influencer you can subscribe to at a fairly low price, since personal consultations are expensive and physically limited. So people with distribution and expertise will train their GPTs and make them accessible at affordable prices. Good for creators, as a passive income source, and good for consumers, as affordable services/education.

Y_Y
0 replies
21h49m

Fuck that, release your models and let the "community" (of unpaid volunteers) freely use and own what they create

Vfiorx
0 replies
21h23m

Make and train your own LLM and you can literally duplicate your brain power and productivity. For almost free. Where’s everyone’s digital doubles?

TheCaptain4815
0 replies
12h9m

So disappointed these only allow for one agent/model. Was hoping for an autogen clone + GUI pretty much. Not very useful with single agents

SheinhardtWigCo
0 replies
17h23m

Without getting into specifics, I operate a ChatGPT plugin with a lot of users.

It sounds like the GPT Store will solve some of the rough edges with plugins as they exist today, but I'm not interested.

OpenAI has been a horrible "partner" to work with. The team that owns this product is terribly under-resourced. You have no hope of reaching a human unless you have an inside contact. Support tickets are handled by GPT; policies are arbitrarily enforced; and major bugs in the API and documentation are simply ignored.

There's an amazing opportunity there for user acquisition, obviously, but do you really want to put your hand in the pain box?

Sai_
0 replies
4h15m

Custom GPTs are fine but they’re sort of useless. Since anyone wanting to use a custom GPT has to have a ChatGPT Plus account, they themselves can spin up an even more targeted, even better prompt than anything you could come up with and use that instead.

Unless your custom GPT provides some special bells and whistles through functions or APIs, your custom GPT ideas are going to be copied by someone who can spin up those special bells and whistles and API calls and cut you out of the middle.

RugnirViking
0 replies
6h9m

why would these be any better than asking regular chatgpt the same thing?

I've had the same problem with plugins: I was excited by the idea, but looking through the store there's nothing remotely useful or interesting.

I don't want to hate on them; I'd love to hear about interesting or useful use cases if anyone's found a use for them.

Racing0461
0 replies
21h35m

Barrier to entry for commercial or useful GPTs/Plugins/"Agents" is almost non-existent since it's just a str.concat(hiddenprompt, user_prompt); the secret sauce (i.e. the weights, chat timeout, and context length) is already generated/limited by OpenAI, and they already have the content moderation/"HR dept" baked in at the weights level. So even if one was to create a "story writer helper" GPT, I don't see how it would be of any value generating new, unique, and interesting content beyond the prompt recipes we already have on reddit/r/chatgpt ("here's 1000 prompts for every use case") that create Netflix-like plots (inclusively diverse casting across ethnicities and orientations, socially conscious storylines, modern jargon-filled dialogue, themes of empowerment, progressive characters, and non-traditional relationship dynamics).

This will most likely be like the google play store with a 99% of GPTs being a repackaged public prompt.

JCharante
0 replies
21h3m

ChatGPT Plus now includes fresh information up to April 2023

I'm so happy; I can finally ask questions about expo and trpc and get fresh answers. I confirmed this by asking chatgpt about the superbowl winners in 2022 & 2023.

Implicated
0 replies
22h7m

Your access to custom GPTs isn’t ready yet. We’re rolling this feature out over the coming days. Check back soon.

Go to ChatGPT

Sad. Have a real-world use case ready to go and a plus account.

Dowwie
0 replies
21h10m

Does this summarize to pre-defined custom instructions and workflows? What categories of fine-tuning are associated with this work?