heredos

@heredos@heredos.net

computer science student.

Birthday: as old as 2005 but born in 2004
Website: https://heredos.net
Batteries: included
gender: not installed
condition: light scratches on the surface, cleaned yesterday evening
colors: orange, see-through shell
flags: red (as in communism is my country)

Location: 48.776396,2.335207

42 following, 26 followers


heredos
@heredos@heredos.net

@hypolite@friendica.mrpetovan.com Maybe they scaled them in order to achieve that, because all of these big issues arose with the first billion-parameter-scale gated models (DALL-E, GPT-3, Stable Diffusion and the like).
But I'm not so sure.

I think it all became known to the public when these models were democratized among non-researchers, when they became products for a wide audience instead of research topics for a handful of engineers.
But I'm pretty sure they were trained on copyrighted data even before that; it's just that they almost never reached the public's hands.

It just turns out that to make generative ML good enough that people will pay for it, we also need models so big that they are almost completely opaque.

But then, the fact that we have very little research on model reverse engineering, interpretability and the like is likely due to big AI labs having absolutely no interest in pursuing it, for the very reasons you just gave.
