
Gemini Flash

xianshou
28 replies
23h40m

Looking at MMLU and other benchmarks, this essentially means sub-second first-token latency with Llama 3 70B quality (but not GPT-4 / Opus), native multimodality, and 1M context.

Not bad compared to rolling your own, but among frontier models the main competitive differentiator was native multimodality. With the release of GPT-4o I'm not clear on why an organization not bound to GCP would pick Gemini. 128k context (4o) is fine unless you're processing whole books/movies at once. Is anyone doing this at scale in a way that can't be filtered down from 1M to 100k?

Workaccount2
16 replies
23h32m

With 1M tokens you can dump 2,000 pages of documents into the context window before starting a chat.

Gemini's strength isn't in being able to answer logic puzzles; its strength is its context length. Studying for an exam? Just put the entire textbook in the chat. Need to use a dead language for an old test system with no information on the internet? Drop the 1,300-page reference manual in and ask away.

ianbicking
10 replies
23h19m

How much do those input tokens cost?

According to https://ai.google.dev/pricing it's $0.70/million input tokens (for a long context). That will be per exchange, so every little back-and-forth will cost around that much (if you're using a substantial portion of the context window).
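For a sense of what "per exchange" means at this scale, a quick back-of-envelope; the 10-turn / ~800K-token figures are just assumptions for illustration:

    # Rough cost of re-sending most of the 1M window on every turn.
    # Assumed: long-context rate of $0.70 per 1M input tokens, ~800K tokens per turn.
    PRICE_PER_M_INPUT = 0.70
    context_tokens = 800_000
    turns = 10

    cost = turns * context_tokens / 1_000_000 * PRICE_PER_M_INPUT
    print(f"~${cost:.2f} in input tokens for a {turns}-turn conversation")  # ~$5.60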

And while I haven't tested Gemini, most LLMs get increasingly wonky as the context fills up: more likely to fixate, more likely to forget instructions.

That big context window could definitely be great for certain tasks (especially information extraction), but it doesn't feel like a generally useful feature.

lxgr
4 replies
22h41m

Is there a way to amortize that cost over several queries, i.e. "pre-bake" a document into a context persisted in some form to allow cheaper follow-up queries about it?

simonw
1 replies
21h28m

They announced that today, calling it "context caching" - but it looks like it's only going to be available for Gemini Pro 1.5, not for Gemini Flash.

It reduces prompt costs by half for those shared prefix tokens, but you have to pay $4.50/million tokens/hour to keep that cache warm - so probably not a useful optimization for most lower traffic applications.

https://ai.google.dev/gemini-api/docs/caching

dragonwriter
0 replies
21h16m

It reduces prompt costs by half for those shared prefix tokens, but you have to pay $4.50/million tokens/hour to keep that cache warm - so probably not a useful optimization for most lower traffic applications

That's on a model with $3.5/1M input token cost, so half price on cached prefix tokens for $4.5/1M/hour breaks even at a little over 2.5 requests/hour using the cached prefix.
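A quick check of that arithmetic, using the same figures as above:

    # Context caching break-even: hourly cache cost vs. per-request savings.
    full_price = 3.50              # $ per 1M input tokens (Gemini 1.5 Pro)
    cached_price = full_price / 2  # cached prefix tokens billed at half price
    cache_cost_per_hour = 4.50     # $ per 1M cached tokens per hour

    savings_per_request = full_price - cached_price         # $1.75 per 1M prefix tokens
    break_even = cache_cost_per_hour / savings_per_request  # ~2.57
    print(f"Break-even: {break_even:.2f} requests/hour per 1M cached tokens")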

inlined
0 replies
22h32m

Though I'm not familiar with the specifics, they announced "context caching"

gcanyon
0 replies
19h39m

Depending on the output window limit, the first query could be something like: "Summarize this down to its essential details" -- then use that to feed future queries.

Tediously, it would be possible to do this chapter by chapter to work around the output limit, building something up for future inputs; see the sketch below.
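A rough sketch of that chapter-by-chapter approach; the complete() helper is hypothetical, standing in for whatever API client you use:

    # Summarize a long document chapter by chapter, then reuse the stitched-together
    # summaries as a much smaller context for follow-up questions.
    def complete(prompt: str) -> str:
        """Hypothetical wrapper around your LLM API of choice."""
        raise NotImplementedError

    def build_condensed_context(chapters: list[str]) -> str:
        summaries = [
            complete(f"Summarize chapter {i} down to its essential details:\n\n{text}")
            for i, text in enumerate(chapters, start=1)
        ]
        return "\n\n".join(summaries)

    def ask(question: str, condensed_context: str) -> str:
        return complete(f"{condensed_context}\n\nUsing only the notes above, answer: {question}")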

Of course, the summary might not fulfill the same functionality as the original source document. YMMV

mcbuilder
3 replies
22h41m

That per-exchange context cost is what really puts me off using cloud LLMs for anything serious. I know batching and everything is needed in the data center, and matters for keeping the KV cache around, but you basically need to fully take over a machine for an interactive session to get context costs that scale with sequence length. So it's useful, but more in a local-LLaMA situation if you want a conversation.

falcor84
1 replies
21h16m

I wonder if we could implement the equivalent of JIT compilation, whereby context sequences that get reused repeatedly are used for online fine-tuning.

neverokay
0 replies
7h18m

Cloud pricing makes it impossible for regular developers to build any app that requires generous user prompting.

$20 of hosting can serve thousands of users per month; a $20 LLM subscription serves just one person. This is fucking impossible.

bredren
0 replies
21h32m

Can anyone speculate on how G arrived at this price, and perhaps how it contrasts with how OAI arrived at its updated pricing? (realizing it can't be held up directly to GPT x at the moment)

tk90
2 replies
20h32m

Isn't there retrieval degradation with such a large context size? I would still think that a RAG system over 128K beats no RAG + a 1M context window, no? (assuming text only)

afro88
0 replies
8h52m

Not sure why you've been downvoted. Needle in a haystack testing exists for a reason

tulip4attoo
1 replies
23h23m

You don't really use it, right? There's no way to debug if you're doing it like this. Also, the accuracy isn't high, and it can't answer complicated questions, making it quite useless for the cost.

thefourthchime
1 replies
22h27m

I tried to use the 1M tokens with Gemini a couple of months ago. It either crashed or responded *very* slowly and then crashed.

I tried a half dozen times and gave up. I hope this one is faster and more stable.

neverokay
0 replies
7h14m

Context length isn't the same as context volume, I think. Just input the 1M tokens more slowly; they'll still be in context.

mupuff1234
1 replies
22h38m

I think that's a bit like asking why someone would need a 1 GB Gmail account when a 50 MB Yahoo account is clearly enough.

It means you can dump context without thinking about it twice and without needing to hack some solutions to deal with context overflow etc.

And given that most use cases likely deal with text rather than multimodal input, the advantage seems pretty clear IMO.

tedsanders
0 replies
22h6m

Long context is a little different from extra email storage. Having 1 GB of storage instead of 50 MB has essentially no downside for the user experience.

But submitting 1M input tokens instead of 100k input tokens:

- Causes your costs to go up ~10x

- Causes your latency to go up ~10x (or between 1x and 10x)

- Can result in worse answers (especially if the model gets distracted by irrelevant info)

So longer context is great, yes, but it's not a no-brainer like more email storage. It brings costs. And whether those costs are worth it depends on what you're doing.

leetharris
1 replies
22h1m

There's no way it's Llama 3 70b quality.

I've been trying to work Gemini 1.5 Pro into our workstream for all kinds of stuff and it is so bad. Unbelievable amount of hallucinations, especially when you introduce video or audio.

I'm not sure I can think of a single use case where a high hallucination tiny multimodal model is practical in most businesses. Without reliability it's just a toy.

dibujaron
0 replies
21h22m

Seconding this. Gemini 1.5 is comically bad at basic tasks that GPT4 breezes through, not to mention GPT4o.

treprinum
0 replies
20h14m

GPT-3.5 has a 0.5s average first-token latency and Claude 3 Haiku 0.4s.

mountainriver
0 replies
4h51m

1M is great for multimodal agentic workflows where you need to keep track of history.

killerstorm
0 replies
21h11m

I guess it depends on what you want to do.

E.g. I want to send an entire code base in a context. It might not fit into 128k.

Filtering down is a complex task by itself. It's much easier to call a single API.

Regarding quality of responses, I've seen both disappointing and brilliant answers from Gemini. So it's maybe worth trying. But it will probably take several iterations until it can be relied upon.

dragonwriter
0 replies
23h20m

With the release of GPT-4o I'm not clear on why an organization not bound to GCP would pick Gemini.

Price, for anything that doesn't need GPT-4 quality, particularly multimodal tasks, for which GPT-4o is OpenAI's cheapest option. GPT-3.5 Turbo, itself 1/10 the cost of GPT-4o, is $0.50/1M tokens on input and $1.50/1M on output, with a 16K context window. Gemini 1.5 Flash, for prompts up to 128K, is $0.35/1M tokens on input and $0.53/1M tokens on output.

For tasks that require multimodality but not GPT-4 smarts (which I think includes a lot of document-processing tasks, for which GPT-4 with Vision and now GPT-4o are magical but pricey), Gemini Flash looks like close to a 95% price cut.

chimney
0 replies
21h21m

Price.

causal
21 replies
1d

1M token context by default is the big feature here IMO, but we need better benchmarks to measure what that really means.

My intuition is that as contexts get longer we start hitting the limits of how much comprehension can be embedded in a single point of vector space, and will need better architectures for selecting the relevant portions of the context.

WhitneyLand
8 replies
16h42m

Limitations of single point in vector space of what dimension?

I’m not sure it’s public knowledge, but it’s an architecture choice. They choose how big to make the embedding dimension.

My point is just that there’s no limitation in principle, it’s just a matter of how they design it and resource constraints.

causal
7 replies
15h19m

Thanks for responding to that point - it's the one most on my mind.

So OpenAI's large embedding model has 3072 dimensions, though in practice far fewer are probably used. Clearly you can't compress 1M tokens down to 3072. Yet those 3072 numbers are all you've got for capturing the full meaning of the previous token when predicting the next one; including all 1M tokens of modifying context.

So perhaps human language is simply never complex enough to need more than 3072 numbers to represent a given train of thought, but that doesn't seem clear to me.

Edit: Since Gemini is relevant here, it looks like their text embedding model is 768 dimensions.

neverokay
3 replies
7h10m

So perhaps human language is simply never complex enough to need more than 3072 numbers to represent a given train of thought, but that doesn't seem clear to me.

Will compute allow that number to go up? Or is that an optimal number?

WhitneyLand
2 replies
5h31m

It has definitely trended upward; there's no special number. It's just a matter of how much compute, storage, and time to allocate to that part of the architecture.

causal
1 replies
2h43m

Has it? It's my understanding that GPT-3 was 12,288 and GPT-4 went down to 3072.

WhitneyLand
0 replies
2h17m

I'm not aware that the internal embedding dimension has been made public for GPT-4 et al.

In general, for the models we know about, it's trended upward, but it'd sure be interesting to know what they're using now.

WhitneyLand
2 replies
5h35m

Yes, but we can distinguish between the embeddings provided to customers and the internal embeddings. One is optimized for use in certain types of applications, but the internal embeddings need to be optimized to support long contexts and are not constrained by the customer-facing embeddings.

For example, with OpenAI I believe it's known that the internal dimension for GPT-3 was 12,288.

causal
1 replies
2h38m

Are the same embeddings not used internally? I thought they were. Maybe I'm wrong about that.

Mistral uses a 1024-dimension embedding for an 8K context. I think the point about trying to capture that rich a context in a smaller number of dimensions still stands?

WhitneyLand
0 replies
2h21m

Yes definitely, you have a valid concern.

For long contexts this is a key consideration along with what self attention optimizations the model chooses to implement.

They don’t make this public, but we can infer they can’t be using full self attention pairs at 1,000,000 tokens because it scales quadratically and would take Terabytes of RAM.

There are different approaches like sparse attention, and the only way to really know how well their choices work is to test it.
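For a sense of scale, here's the back-of-envelope that rules out materializing the full attention score matrix (fp16 scores, counted per head and per layer):

    # Memory for a dense attention score matrix at different context lengths.
    bytes_per_score = 2  # fp16

    for seq_len in (128_000, 1_000_000):
        matrix_bytes = seq_len * seq_len * bytes_per_score
        print(f"{seq_len:>9} tokens: {matrix_bytes / 1e12:.2f} TB per head per layer")
    # 128K tokens -> ~0.03 TB; 1M tokens -> ~2 TB, so dense attention is off the table.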

dragonwriter
6 replies
23h12m

1M token context by default is the big feature here IMO, but we need better benchmarks to measure what that really means.

Multimodality in a model that's between 4% and 7% of the cost per token of OpenAI's cheapest multimodal model is an important feature when you are talking about production use and not just economically unsustainable demos.

refulgentis
4 replies
23h2m

It's in preview, so it can't be used in production; they already rug-pulled people building on Gemini w/r/t cost and RPM, and they're pointedly not putting any RPM or cost on the page. (Seriously, try finding info on cost, RPM, or release right now; you get linked in circles.)

Agree on OpenAI multimodal, but it's sort of a stilted example itself, because OpenAI has a hole in its lineup - e.g. Claude Haiku is multimodal, faster, and significantly cheaper than GPT-3.5.

dragonwriter
1 replies
22h56m

they're pointedly not putting any RPM or cost on the page

360 RPM base limit, pricing is posted.

seriously, try finding info on cost, RPM, or release right now,

I wasn't making up numbers; it's on their Gemini API pricing page: https://ai.google.dev/pricing

refulgentis
0 replies
22h49m

Nice, thanks (btw, I didn't think you were making it up, it was in the keynote!)

causal
1 replies
22h55m

+1 on Haiku being oft overlooked.

verdverm
0 replies
22h48m

Shows the power of the brand and the limit of names consumers will recall long term

"Who are the biggest soda or potato chip makers?"

leetharris
0 replies
22h3m

The problem is that even 1.5 Pro seems completely useless for long context multimodal stuff.

I have tried it for so many use cases in video / audio and it hallucinates an unbelievable amount. More than any other model I've ever used.

So if 1.5 Pro can't even handle simple tasks without hallucination, I imagine this tiny model is even more useless.

sojuz151
1 replies
6h40m

My intuition is that as contexts get longer we start hitting the limits of how much comprehension can be embedded in a single point of vector space, and will need better architectures for selecting the relevant portions of the context.

We are dealing with multi-head attention, so we have multiple points per token. You can always increase the number of heads or the size of the key vector.

causal
0 replies
1h50m

The token embedding is what ultimately gets nudged around by the heads though, right? The key vector just relates to the context size, not the token embedding size, afaik.

shoelessone
1 replies
9h48m

My intuition is that as contexts get longer we start hitting the limits of how much comprehension can be embedded in a single point of vector space, and will need better architectures for selecting the relevant portions of the context.

Is it possible to explain what this means in a way that somebody only roughly familiar with vectors and vector databases can understand? Or recommend an article or further reading on the topic?

causal
0 replies
1h38m

So most of my understanding comes from this series, particularly the last two videos: https://www.3blue1brown.com/topics/neural-networks

Essentially, each token of a text occupies a point in a high-dimensional space that represents meaning, and LLMs predict the next token by modifying the last token with the context of all the tokens before it. Attention heads are basically a way of choosing which prior tokens are most relevant and adjusting the last token's point in that space accordingly.
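If it helps, here's a minimal single-head attention sketch in numpy (toy dimensions, random weights), just to make the "nudge each embedding using prior tokens" idea concrete:

    import numpy as np

    def single_head_attention(x: np.ndarray, d_k: int = 16) -> np.ndarray:
        """x: (seq_len, d_model) token embeddings; returns context-adjusted embeddings."""
        rng = np.random.default_rng(0)
        d_model = x.shape[1]
        W_q, W_k = rng.normal(size=(2, d_model, d_k))
        W_v = rng.normal(size=(d_model, d_model))

        q, k, v = x @ W_q, x @ W_k, x @ W_v
        scores = q @ k.T / np.sqrt(d_k)                      # how relevant is each prior token
        scores = np.tril(scores) + np.triu(np.full_like(scores, -1e9), k=1)  # causal mask
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)       # softmax over prior tokens
        return x + weights @ v                               # nudge each embedding by weighted context

    tokens = np.random.default_rng(1).normal(size=(8, 32))   # 8 tokens, 32-dim embeddings
    print(single_head_attention(tokens).shape)               # (8, 32)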

refulgentis
0 replies
23h8m

Yeah, it's not very good in practice. You can get a halfway decent demo out of it ("look, I gave it 6.5 Harry Potters and it made an SVG map connecting characters with annotations!!" ... some of the characters ... spare annotations ... cost $20). Just good enough to fool you a couple of times when you try to make it work 10 times.

cs702
14 replies
20h23m

We're witnessing a race to the bottom on pricing as it's happening. Competition based solely or mainly on pricing is a defining characteristic of a commodity market, i.e., a market in which competing products are interchangeable, and buyers are happy to switch to the cheapest option for a given level of quality.

There's an old saying that if you're selling a commodity, "you can only be as smart as your dumbest competitor."

If we want to be more polite, we could say instead: "you can only price your service as high as your lowest-cost competitor."

It seems that a lot of capital that has been "invested" to train AI models is, ahem, unlikely ever to be recovered.

Aloisius
5 replies
20h18m

Price competition isn't limited to commodities.

cs702
4 replies
20h15m

I never said it was.

Aloisius
2 replies
19h59m

Then why imply that it is a commodity because they (partly) compete on price?

Fungibility is the defining characteristic of commodities. While these products can be used to accomplish the same task, we're not near real fungibility yet.

cs702
1 replies
6h47m

Products that are fungible compete on price (what else?). Chat-with-AI services that have similar performance are pretty fungible today. Switching from one to the other is... remarkably easy. The moment Gemini Flash's competitors start losing customers they will lower their prices to remain competitive.

Aloisius
0 replies
1h11m

Lots of products besides commodities will lower their prices to remain competitive. General Electric keeps their prices competitive with Pratt & Whitney, but that doesn't make jet engines a commodity.

This product from Google clearly competes on price/performance ratio, speed and of course, brand.

EGreg
0 replies
20h9m

You never said it wasn't, either :-P

rmbyrro
2 replies
20h1m

Google figured it can't beat OpenAI technically, but they sure know they can beat them financially and infrastructurally.

tfsh
0 replies
18h54m

In technical terms they're on par. But you're correct that Google can bet on its decades of infrastructure.

__loam
0 replies
19h39m

Is infrastructure and scale not an expression of technical ability? It should have been obvious that Meta and Google would bury a tiny company with fewer than 1,000 employees, given the amount of capital they can leverage for compute, talent, and data. Google literally invented the transformer that GPT is built on.

Delmololo
1 replies
20h14m

But the race to the bottom has a floor, right?

People expect to see a return on investment, which will set the bottom of pricing (at least once the old money runs out).

I'm also not sure AI is a good example, because AI will become fundamental. That means if you don't invest you might be gone, so it's more like a fee paid in case the investment doesn't pan out.

__loam
0 replies
19h37m

Supply and demand determines price, not the hopes and dreams of investors.

r0m4n0
0 replies
20h14m

Google is building on top of and integrated with their cloud offerings. Having first party solutions like this gives big cloud customers an easy way to integrate. For Google it’s just another tool in the chest that gets sold to these big enterprises. Many go all in on all the same cloud products. Also the models are only the building blocks. Other cloud products at Google will be built with this and sold as a service

Not so sure about Open AI though…

daghamm
0 replies
20h14m

Is this a race to the bottom, or just Google's new TPUs being extremely efficient?

__loam
0 replies
19h42m

You're saying the quiet part out loud here.

kherud
11 replies
21h32m

Now that context length seems abundant for most tasks, I'm wondering why sub-word tokens are still used. I'm really curious how character-based LLMs would compare. With 2M context, the compute bottleneck fades away. I'm not sure, though, what role the vocabulary size plays. Maybe a large size is critical, since the embedding already contains a big chunk of the knowledge. On the other hand, using a character-based vocabulary would solve multiple problems, I think, like glitch tokens and possibly things like arithmetic and rhyming capabilities. Implementing sub-word tokenizers correctly and training them also seems quite complex. On a character level this should be trivial.
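A rough illustration of the sequence-length trade-off, using OpenAI's cl100k_base tokenizer as a stand-in for a typical sub-word vocabulary (assumes tiktoken is installed; any sub-word tokenizer would show a similar pattern):

    import tiktoken  # pip install tiktoken

    text = "Implementing sub-word tokenizers correctly and training them seems quite complex."
    enc = tiktoken.get_encoding("cl100k_base")

    subword_len = len(enc.encode(text))  # sub-word tokens
    char_len = len(text)                 # character-level "tokens"
    print(subword_len, char_len, round(char_len / subword_len, 1))
    # Roughly 4x more tokens at the character level, so attention cost grows ~16x,
    # in exchange for a tiny vocabulary and no tokenizer edge cases.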

darby_eight
4 replies
20h55m

On a character level this should be trivial.

Characters are not the semantic components of words; those are syllables, generally speaking anyway. I've got to imagine this approach would yield higher-quality results than the Roman alphabet. I'm curious whether this could be tested by looking at how LLMs handle English vs. Chinese.

inbetween
3 replies
20h30m

The minimal semantic parts of words are morphemes. Syllables are phonological units (roughly: the minimal unit for rhythmic purposes such as stress, etc)

darby_eight
2 replies
20h13m

Only in languages that have morphemes! This is hardly a universal attribute of language so much as an attribute of those that use an alphabet to encode sounds. It makes more sense to just bypass the encoding and directly consider the speech.

Besides, considering morphemes as semantic often results in a completely different meaning than we actually intend. We aren't trying to train a chatbot to speak in prefixes and suffixes, we're trying to train a chatbot to speak in natural language, even if it is encoded to latin script before output.

inbetween
1 replies
8h16m

That's technically wrong. Every language has morphemes for the simple reason that every word is at least one morpheme. `cat` is a morpheme. `cats` is two morphemes (cat-s).

(The point about semantics is also technically wrong. You would first need to specify your view of semantic compositionality before such a point can be evaluated, but the usual views of semantics don't have any such consequence.)

darby_eight
0 replies
1h28m

Every language has morphemes for the simple reason that every word is at least one morpheme.

Sure, if you define "morpheme" as a collection of syllables that's meaningful to people using alphabetic script. I don't see any benefit to this compared to working with syllables directly, which is a meaningful concept regardless of the script used to encode them.

AaronFriel
3 replies
21h19m

The attention mechanism is vastly more efficient to train when it can attend to larger, more meaningful tokens. For inference servers, a significant amount of memory goes into the KV cache, and as you note, to build up the embedding through attention would then require correlating far more tokens, each of which is "less meaningful".

I think we may get to this point eventually, in the limit we will want multimodal LLMs that understand images and sounds down to the pixel and frequency, and it seems like for text, too, we will eventually want that as well.
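To put a number on the KV cache point, a rough estimate with made-up but Llama-3-70B-ish shapes (80 layers, 8 KV heads via GQA, 128-dim heads, fp16):

    # KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes per element
    layers, kv_heads, head_dim, bytes_per_elem = 80, 8, 128, 2

    def kv_cache_gb(seq_len: int) -> float:
        return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

    for seq_len in (8_000, 128_000, 1_000_000):
        print(f"{seq_len:>9} tokens: {kv_cache_gb(seq_len):6.1f} GB per sequence")
    # ~2.6 GB at 8K, ~42 GB at 128K, ~328 GB at 1M context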

yk
1 replies
20h32m

a significant amount of memory goes into the KV cache

Is there a good paper (or talk) on how inference looks at scale? (Kinda like ELI-using-single-GPUs)

AaronFriel
0 replies
55m

The PagedAttention paper is a good starting point as it represents the first major open source inference engine that had "pretty good" batch performance for transformers.

https://arxiv.org/pdf/2309.06180

thomasahle
0 replies
19h56m

Maybe you could just use a good-old 1D-CNN for the bottom 3-4 layers. Then the model has been able to combine characters into roughly token length chunks anyway.

Just make sure to have some big MLPs at the start too, to enrich the "tokens" with the information currently stored in the embedding tables.
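One possible shape of that idea in PyTorch (purely a sketch; the strided convolution stands in for "combine characters into roughly token-length chunks"):

    import torch
    import torch.nn as nn

    class CharFrontEnd(nn.Module):
        """Byte/char embeddings -> MLP enrichment -> strided 1D convs that pool
        characters into roughly token-sized chunks for the transformer above."""
        def __init__(self, vocab=256, d_model=512):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)
            self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                     nn.Linear(4 * d_model, d_model))
            self.convs = nn.Sequential(
                nn.Conv1d(d_model, d_model, kernel_size=5, stride=1, padding=2), nn.GELU(),
                nn.Conv1d(d_model, d_model, kernel_size=4, stride=4), nn.GELU(),  # ~4 chars -> 1 "token"
            )

        def forward(self, char_ids):              # (batch, n_chars)
            x = self.mlp(self.embed(char_ids))    # (batch, n_chars, d_model)
            x = self.convs(x.transpose(1, 2))     # Conv1d wants (batch, d_model, n_chars)
            return x.transpose(1, 2)              # (batch, ~n_chars/4, d_model)

    print(CharFrontEnd()(torch.randint(0, 256, (1, 64))).shape)  # torch.Size([1, 16, 512])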

novaRom
0 replies
21h12m

In AI music generation we get much better results with large vocabulary sizes on the order of 10^6; my uneducated guess is that's because transformers are not universal pattern recognizers and can only catch patterns at a certain level of granularity.

joaogui1
0 replies
21h12m

I would say 2 big problems are:

1. latency, which would get worse if you have to sequentially generate more output

2. These models very roughly turn tokens into an "average meaning" at the embedding layer, followed by attention layers that combine the meanings, and feed-forward layers that match the current meaning combination to some kind of learned archetype/prototype. When you move from word parts to characters, all of that becomes more confusing (what's the average meaning of "a"?), so I don't think there are good enough techniques to learn character-based models yet.

numbers
9 replies
22h53m

It's ironic that when you ask these AI chatbots what their own context size is, they don't know. ChatGPT doesn't even know that 4o exists when you're using 4o.

SoftTalker
6 replies
22h49m

Does a monkey know that it is a monkey?

verdverm
5 replies
22h45m

I think "yes" is the most likely answer here

animals have a lot more intelligence than they typically get attributed

Tool use, names, language, social structure and behavior, even drug use has been shown across many species

chaorace
4 replies
22h5m

Okay, but the monkey doesn't know that it knows that it's a monkey.

verdverm
2 replies
19h30m

are you sure?

Many animals recognize themselves and their species as separate concepts

keefle
1 replies
11h21m

He meant something more meta I believe. Knowing you are a monkey is one thing, and knowing that you know you are a monkey is a another thing. It's about being cognisant of the fact that there is something called knowledge and you have it

chaorace
0 replies
3h34m

Precisely. To put it more concretely: it is no small feat to grasp the abstract distinction between known-knowns, known-unknowns, unknown-knowns, and unknown-unknowns. They do not know what they do not know.

fourthark
0 replies
16h27m

How do you know?

simonw
0 replies
20h51m

The models didn't exist when their training data was collected.

But... that's not really an excuse any more. Model vendors should understand now that the most natural thing in the world is for people to ask models directly about their own abilities and architecture.

I think models should have a final layer of fine-tuning or even system prompting to help them answer these kinds of questions in a useful way.

advisedwang
0 replies
22h19m

Ask a human how many neurons they have. Hell, over history humans haven't even consistently understood that the brain is where cognition happens.

refulgentis
7 replies
23h9m

It's absolutely unconscionable that Gemini Ultra got memory-holed. I can't trust anything that Google says about benchmarks.

It seemingly existed only so that, in December 2023, Google could claim Gemini ~= GPT-4 (the April 2023 version, on paper, with "32-shot CoT" vs. 5-shot GPT-4).

summerlight
5 replies
22h43m

Gemini Ultra is 1.0 with an 8K window. This is 1.5 with a 1M window. Your feeling is based on an incorrect assumption.

anoncareer0212
4 replies
21h50m

And?

You're replying to a comment that points out Gemini Ultra was never released, wasn't mentioned today, and it's the only model Google's benchmarking at GPT-4 level. They didn't say anything about feelings or context window.

summerlight
1 replies
21h40m

You're replying to a comment that points out Gemini Ultra was never released

What are you even talking about? How do you know it's memory-holed if you haven't used it? The API is not GA, but the model can be used through the chatbot subscription. GP is talking about their lack of trust in Google's claim of 1M context tokens, not GPT-4-level reasoning. If you expect GPT-4-level performance from cost-efficient models, that's another problem.

refulgentis
0 replies
15h27m

Idk why you're so aggro, they're right, I meant the GPT-4 level reasoning

dontreact
1 replies
21h43m

Gemini Ultra has been available for people to try via Gemini Advanced (formerly Bard) for a few months

cma
0 replies
20h10m

It says it may fall back to a worse model under load, and there is no way to tell which one you are getting. I think ChatGPT has at times done something similar, though.

CSMastermind
0 replies
22h42m

Anyone who uses both products regularly will tell you that Gemini Advanced is far behind GPT-4 and Claude 3 Opus.

Pretending that they have a model internally that's on par but they're not releasing it is a very "my girlfriend goes to another school" move and makes no sense if they're a business that's actually trying to compete.

mrcwinn
4 replies
15h20m

I will say Google certainly has the better branding team. I like Gemini, Gems, and so on. “ChatGPT” is quite a clunky mess. OpenAI just feels like a faceless entity.

All things that could change but seems late in the game at this point. They certainly had the money to be more creative as they came to market.

ukuina
2 replies
14h19m

OpenAI desperately needs a marketing consult.

"GPT4o"? Seriously?

Even "GPT4 Omni" is easier in conversation, and that's what the "o" stands for!

They severely underestimate the number of casual users they have.

zarzavat
0 replies
12h43m

OpenAI doesn’t need marketing because everybody knows who’s the best. Same reason that if I asked you what’s the best violin you would say Stradivari, even though you’ve never seen an ad for one.

OpenAI could call their model the “[poo emoji] 5000” for all the difference it would make.

cpeterso
0 replies
12h40m

Google "Gemini" is a much better product name than anything OpenAI has, but the Gemini product family could use some structure:

  Gemini Advanced (“with Ultra 1.0”)
  Gemini Ultra
  Gemini Pro
  Gemini Flash
  Gemini Nano-1
  Gemini Nano-2

precompute
0 replies
9h56m

"ChatGPT" is like "Google". "Gemini" is never replacing that.

cynicalsecurity
3 replies
23h41m

Feed 1 mln tokens, get blocked by some silly, overly sensitive "safety" trigger.

gpm
2 replies
23h7m

Last I checked you could disable the safety triggers as an API user with gemini (which doesn't alleviate your obligation to follow the TOS as to the uses of the model).

VS1999
1 replies
22h58m

I'm not working with a company that can just write in the ToS "we can do anything we want. lol. lmao" and expect me to follow it religiously. Corporations need less control over speech, not more.

zxexz
0 replies
11h4m

I mean, you are using a service they're providing; many would say that they're exercising their rights by gatekeeping how it's used. There are pretty good models out there you could use however you want for your own purposes, whatever they are. I occasionally fine-tune Mixtral on HN posts + comments and chat with the comments. An emergent Dang actually once told me off for flame-baiting a free-speech comment.

zone411
1 replies
16h47m

Gemini 1.5 Flash scores 15.3 on the NYT Connections benchmark:

GPT-4 turbo (gpt-4-0125-preview) 31.0

GPT-4o 30.7

GPT-4 turbo (gpt-4-turbo-2024-04-09) 29.7

GPT-4 turbo (gpt-4-1106-preview) 28.8

Claude 3 Opus 27.3

GPT-4 (0613) 26.1

Llama 3 Instruct 70B 24.0

Gemini Pro 1.5 19.9

Mistral Large 17.7

-----> Gemini 1.5 Flash 15.3

Mistral Medium 15.0

Gemini Pro 1.0 14.2

Llama 3 Instruct 8B 12.3

Mixtral-8x22B Instruct 12.2

ukuina
0 replies
14h17m

So many high-performing, yet poorly-named OpenAI models in that list.

webprofusion
1 replies
15h26m

Uh guys, yeah.. Adobe are on the phone saying something about trademark infringement, apparently Flash is something else? I don't know, I've never heard of it..

exodust
0 replies
11h28m

Interestingly, until your comment I hadn't made any connection with old Flash, even though I spent hundreds of hours making Flash games.

This suggests names don't stick around for long and can be re-used. Perhaps Google could bring back "Buzz" and "Wave" since enough time has passed!

stan_kirdey
1 replies
15h34m

I've been diligently trying to use Gemini 1.5 Pro, and it is not even on the level of Llama 3 70B. I really hope Gemini improves, even if it means a reduced context length.

ukuina
0 replies
14h15m

FAIR really swung for the fences with Llama3. It's a very impressive model, but the 8K context size is quite limiting for most use-cases.

nojvek
1 replies
21h29m

Will wait for Meta to release Flash equivalent weights.

Multi-modal models running offline on mobile devices with millisecond per-token latencies seem like the future.

Where is Apple in all of this. Why is Siri still so shit?

visarga
0 replies
20h43m

Apple made a deal with OpenAI for GPT-4o; the stakes are indeed high, and they can't be caught with their pants down. The iPhone needs to remain the premium brand.

michaelteter
1 replies
13h46m

If Gemini Flash is just faster Gemini, then I would say that bad answers aren't better when delivered more quickly.

I ran Gemini Pro side by side with ChatGPT 4 for a few months on practical coding, systems architecture, and occasional general questions. ChatGPT was more useful at least 80% of the time. Gemini was either wrong or so laboriously meandering in reaching a useful answer that it wasn't worth using, in my experience.

Faster isn't what I needed... Maybe it's also "smarter" (more useful) too now?

ganzuul
0 replies
7h59m

Presumably we are defining smartness as doing more with less, so this indicates they have something going on in the latent space which will scale.

CapsAdmin
1 replies
3h16m

I was really fascinated by Gemini 1.5 pro, though it was a bit slow and sort of 70% accurate in my use case.

I have a huge (~500K tokens), complex, niche codebase that I've worked on for many years, largely alone. There are parts of it I wish to refactor, but I struggle because I've become blind to my own code. It also sometimes feels lonely in a way: if it were a game I could at least show my friends, but this project is too abstract.

Gemini missed the mark a few times, especially when asking about more complex things but overall it was useful. That it got things wrong is sort of ok because I knew the codebase well enough to spot those mistakes.

Gemini 1.5 pro gave me a glimpse into what it was like having "someone" understand your whole codebase, hint at areas to improve, etc. A bit like a true copilot or coworker, but for a dream hobby project.

fnordpiglet
0 replies
3h4m

How does it compare to others products in your use case?

simonw
0 replies
20h55m

I upgraded my llm-gemini plugin to provide CLI access to Gemini Flash:

    pipx install llm # or brew install llm
    llm install llm-gemini --upgrade
    llm keys set gemini
    # paste API key here
    llm -m gemini-1.5-flash-latest 'a short poem about otters'
https://github.com/simonw/llm-gemini/releases/tag/0.1a4

quantisan
0 replies
20h35m

Gemini 1.5 Flash pricing:

Price (input): $0.35 / 1 million tokens (prompts up to 128K tokens); $0.70 / 1 million tokens (prompts longer than 128K)

Price (output): $0.53 / 1 million tokens (prompts up to 128K tokens); $1.05 / 1 million tokens (prompts longer than 128K)

---

Compared to GPT-3.5 Turbo: input US$0.50 / 1M tokens, output US$1.50 / 1M tokens

objektif
0 replies
19h33m

Does Google have anything like the OpenAI Assistants API? If they did, I would definitely give it a try.

nightski
0 replies
22h23m

A lightweight model that you can only use in the cloud? That is amusing. These tech megacorps are really intent on owning your usage of AI. But we must not let that be the future.

eru
0 replies
19h23m

The website talks about a specific benchmark:

Python code generation. Held out dataset HumanEval-like, not leaked on the web

What I find interesting here is that for this particular benchmark _not_ publishing the benchmark is advertised as a feature (instead of as a sign of 'trust me, bro, we have a great benchmark'), and I can understand why. Still these are strange times we live in.

chefkd
0 replies
18h22m

Any links to companies working on a fully local only AI?

alephxyz
0 replies
21h24m

Not very informative. They're selling it as the fast/cheap option, but they don't benchmark inference speed or compare it with non-Gemini models.

According to https://ai.google.dev/pricing it's priced a bit lower than GPT-3.5 Turbo, but there's no telling how it compares to it.

ArkimPhiri
0 replies
6h51m

I added Gemini 1.5 Flash to my AI app Semaj AI and it's indeed fast; it is also much more intelligent than Gemini 1.0 Pro.