(I originally wrote this post a few months ago and sat on it. Since then Google has announced entering the market and MSFT announced Bing and other AI integrations. So I am updating and publishing now, and will undoubtedly be wrong again in a few months.)

The AI world is divisible into roughly 3 areas (this is a massive oversimplification of course):

1. Large language models (LLMs). These are general purpose models like GPT-4 or Chinchilla, in which the web (or another source of text / language) is ingested and transformed into models that can do everything from summarize legal documents to serve as a search engine or friendly chat bot.

2. Image generation. This includes models like Midjourney, Dall-E, or Stable Diffusion, as well as currently simple video approaches and 3D models like NeRF. These models allow you to type a prompt to generate an image.

3. Other (really a very large set of other tech & markets that really should *not* be naturally clustered together). This includes robotics, self driving cars, protein folding, and numerous other application areas. I am dramatically oversimplifying things in a dumb way by bucketing lots of stuff here. Obviously different model architectures and end markets exist for AlphaFold 2 versus self driving cars. However, this is meant to be a short post versus a book, so please bear with me. For this post I am going to ignore everything in this bucket for now.

When people talk about "Generative AI", they tend to mix these areas together into one big bucket.