Welcome, fellow readers, to this week’s AI news update by Technews Provider. Before we delve into today’s topics, take a moment to comment and share your favorite AI model with us. Now, let’s get started with the latest developments.

Nvidia Launches New HGX H200 System


Nvidia recently announced the launch of a new system called HGX H200 that supercharges its Hopper AI computing platform. It offers faster, larger HBM3e memory to accelerate generative AI and high-performance computing. The H200 delivers 141 GB of memory at 4.8 terabytes per second, nearly double the capacity and roughly 2.4x the bandwidth of the A100. It is targeted to ship in Q2 2024, aimed primarily at large corporations training foundation models or building out generative AI data centers.

LangChain Expands Microsoft Collaboration


LangChain posted about expanding its collaboration with Microsoft, though without many specifics. LangChain's tools help developers build natural-language experiences on top of large language models. The post highlights how various companies use LangChain to augment their offerings with premium features and new revenue streams, and it hints at potential integrations between LangChain's SaaS product LangSmith, Microsoft Azure, and other Microsoft tools.
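To give a flavor of what a LangChain-on-Azure setup looks like in practice, here is a minimal sketch that routes a LangChain chat model through an Azure OpenAI deployment. The endpoint, key, and deployment name are placeholders for your own resources, not anything announced in the post.

```python
# Minimal sketch: calling an Azure-hosted OpenAI model through LangChain.
# Endpoint, key, and deployment name below are placeholders.
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

llm = AzureChatOpenAI(
    openai_api_base="https://<your-resource>.openai.azure.com/",
    openai_api_version="2023-05-15",
    openai_api_key="<your-key>",
    deployment_name="gpt-35-turbo",  # the name you gave your Azure deployment
)

reply = llm([HumanMessage(content="Summarize this week's AI news in one sentence.")])
print(reply.content)
```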

Google Bard Opens to Teens


Google Bard, previously restricted to adults, now allows teen users with eligible Google accounts. New features include generating charts from tables or data supplied in prompts, improved math equation rendering, and graphing from typed input. This expansion, focused on homework help, lets teens leverage Bard for school assignments.

Exploring GPTs


Simon Willison wrote an excellent blog post exploring GPTs. He highlights specific things he likes about OpenAI's GPT offering and shares examples of great GPT applications others have created. The post offers useful insights whether you're new to the world of OpenAI's GPTs or an experienced developer.

Meta Launches Emu Video and Emu Edit


Meta AI introduced Emu Video, which generates high-quality AI videos, alongside Emu Edit, a companion model for instruction-based image edits like background removal. The editing capabilities resemble Google Pixel's Magic Eraser. Strangely, Meta didn't release the model weights publicly, but the papers and code are available.

New Technique – ZipLoRA


Google Research proposed a new AI technique called ZipLoRA for effectively merging two LoRAs (low-rank adapters). Comparisons show it retains subject and style characteristics better than other approaches when combining a content LoRA with a style LoRA, which benefits creative applications like AI image generation. The paper's method has already been replicated for use with Stable Diffusion and other LoRA models.
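For intuition, here is a toy PyTorch sketch of the core idea: learn per-column merger coefficients for the two LoRA weight deltas, keeping the merged delta close to each adapter while pushing the coefficients toward disjoint columns. This is a simplified illustration of the paper's objective, not the authors' released code.

```python
# Toy sketch of the ZipLoRA idea: merge a content LoRA and a style LoRA
# by learning per-column merger coefficients. Simplified illustration only.
import torch

def zip_merge(delta_content, delta_style, steps=200, lr=1e-2):
    """delta_*: (out_features, in_features) LoRA weight deltas, i.e. B @ A."""
    n_in = delta_content.shape[1]
    m_c = torch.ones(n_in, requires_grad=True)  # content merger coefficients
    m_s = torch.ones(n_in, requires_grad=True)  # style merger coefficients
    opt = torch.optim.Adam([m_c, m_s], lr=lr)
    for _ in range(steps):
        merged = delta_content * m_c + delta_style * m_s
        # Keep the merged delta close to each original adapter...
        loss = ((merged - delta_content) ** 2).mean() \
             + ((merged - delta_style) ** 2).mean()
        # ...while penalizing overlap between the two coefficient vectors,
        # which nudges content and style onto disjoint columns (the "zip").
        loss = loss + torch.abs(m_c * m_s).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return (delta_content * m_c + delta_style * m_s).detach()

# Example with random stand-ins for two LoRA deltas:
merged = zip_merge(torch.randn(64, 32), torch.randn(64, 32))
```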

Inflection Launches Second Best Model


Inflection, the startup behind the personal assistant app Pi, announced its new Inflection-2 model, claiming it's the "second best model in the world." It shows strong improvements in factual knowledge, stylistic control, and reasoning over the original Inflection-1. Benchmarks compare it favorably to Google's PaLM 2, while avoiding direct comparison to Claude, which would likely show larger gaps.

Economics-Focused Model Built on StableLM


An AI researcher, ACR, fine-tuned Stability AI's 3-billion-parameter StableLM on the economics-focused DataForge dataset. Despite its smaller size, the model demonstrates accurate economics knowledge and good performance on relevant questions, highlighting the viability of smaller customized models versus gigantic generalized ones. The model and training code are available as open source.
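For readers who want to attempt something similar, here is a minimal sketch of a LoRA fine-tune of a 3B StableLM base model using Hugging Face transformers and peft. The model id, dataset file, and hyperparameters are illustrative placeholders, not ACR's actual setup.

```python
# Minimal sketch: LoRA fine-tune of a 3B StableLM base model.
# Dataset file and hyperparameters are placeholders, not ACR's setup.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_id = "stabilityai/stablelm-3b-4e1t"  # a 3B StableLM base model
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         task_type="CAUSAL_LM"))

# One JSON object per line with a "text" field of economics Q&A pairs.
ds = load_dataset("json", data_files="econ_qa.jsonl")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024))

Trainer(
    model=model,
    args=TrainingArguments("econ-stablelm", per_device_train_batch_size=2,
                           num_train_epochs=3, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```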

Introducing CogVLM


A new Chinese multimodal model, CogVLM, offers 17 billion parameters: roughly 10 billion for vision and 7 billion for language. Benchmark tests show it surpassing or matching far larger models such as Google's PaLM-E, demonstrating state-of-the-art performance compared to other open vision-language models like Alibaba's Qwen-VL. The model weights are available to use but require capable GPUs for inference. Still, it's an impressive open-source option going forward.
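If you want to try it, the loading pattern below should be roughly right, assuming the chat weights are published on Hugging Face under an id like THUDM/cogvlm-chat-hf; the preprocessing helper ships as remote code, so verify the exact calls against the model card.

```python
# Hedged sketch: loading CogVLM for inference via transformers remote code.
# Model id and the build_conversation_input_ids helper follow the model
# card as best recalled; double-check before relying on this.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda").eval()

image = Image.open("photo.jpg").convert("RGB")
inputs = model.build_conversation_input_ids(
    tokenizer, query="What is in this image?", history=[], images=[image])
batch = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}
with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```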

Faster Inference with Lookahead Decoding


Researchers shared a new parallel decoding algorithm called lookahead decoding that accelerates large language model inference. Comparisons show dramatic speedups over standard autoregressive decoding without losing output quality and without any additional model tuning, auxiliary data, or draft model. The technique helps offset the otherwise costly compute needs of large models, and code is already available for experimentation.
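To make the pattern concrete, here is a toy Python sketch of the general draft-and-verify idea: guess several future tokens from previously observed continuations, then keep only the prefix the model confirms. This illustrates the concept only; the actual algorithm drafts via Jacobi iteration over n-grams and verifies them inside the same forward pass.

```python
# Toy sketch of draft-and-verify decoding. Illustration only: real
# lookahead decoding drafts and verifies in a single batched forward pass.
def toy_model(prefix):
    """Stand-in for an LLM's deterministic greedy next-token function."""
    return (sum(prefix) * 31 + len(prefix) * 7) % 50

def generate(prompt, n_tokens):
    out, pool = list(prompt), {}   # pool: last token -> seen continuation
    while len(out) - len(prompt) < n_tokens:
        key = out[-1]
        draft = pool.get(key, [])  # draft tokens from past n-grams
        accepted = []
        for tok in draft:          # verify the drafted run; a real
            if toy_model(out + accepted) == tok:   # implementation checks
                accepted.append(tok)               # these in parallel
            else:
                break
        if not accepted:
            accepted = [toy_model(out)]            # fall back to one step
        pool[key] = accepted + [toy_model(out + accepted)]  # refresh pool
        out += accepted
    return out[len(prompt):len(prompt) + n_tokens]

print(generate([1, 2, 3], 20))
```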

How to Monetize AI Knowledge


With progress in generative AI capabilities, the focus is shifting toward real-world usage and applications. Startup Defog raised $2.2 million to build tailored large language models for data analysis, identifying the opportunity in domain specificity.

Builder.io, a no-code GPT builder, monetized their template, showing one successful approach. Overall, creators should think vertically, solving industry or niche problems rather than building generalized tools.
