Out With the Old, In With the New!

Google Stitch Gets UI/UX Queen Status

Everyone's favorite tool can fall from grace the moment something faster, cheaper, or shinier shows up. And it just did -

Google Labs gave Stitch, its AI design tool, a major glow-up -

  • Generates a wireframe from a text OR voice prompt

  • Responds well to feedback

  • And it is free!

Because of this, Figma's stock dropped 12% the next morning.

On a personal note - UI/UX folks, I think you should try to get into the product or engineering side of things as well, to build up your roles and responsibilities. Reply to this email for a one-on-one consultation!

How to apply this:

  1. Run a 30-minute Stitch trial on your next feature spec — generate three layout variants, screenshot them, and drop them into your design review. If it saves one async back-and-forth with your design team, it has paid for itself (cost: $0).

  2. If your team uses Figma for early wireframing specifically, evaluate whether Stitch can replace that phase while Figma remains for production polish — a two-tool workflow that costs less overall.

  3. Stitch outputs can be exported and imported into Figma as a starting point — use it as a rapid first draft generator, not a replacement for your design system.

  4. For teams without a dedicated designer, Stitch changes the calculus on when to involve design in the process. Early visual alignment is now feasible at the engineering-PM stage, before any design resource is allocated.

  5. Watch Google I/O 2026 (the company's annual developer conference, typically held in May) for Stitch's enterprise roadmap announcement — whether it gets a paid tier, API access, or deeper Workspace integration will determine its long-term viability.

Story 2

Light Replaces Copper Wires?!

  • Massive Co-Investment: NVIDIA and AMD jointly invested $500M in Ayar Labs (Series E), valuing the startup at $3.75B – a significant move reflecting a shared industry concern.

  • The Copper Problem: The primary bottleneck in AI training clusters isn't GPU compute power, but the speed of data transfer via copper interconnects.

  • Physics Limitations: Copper degrades signals over distance, generates heat, and limits bandwidth, hindering the scaling of AI models.

  • Ayar Labs' Solution: CPO (Co-Packaged Optics):

    • Silicon Waveguides: The core of CPO is tiny, precisely engineered channels carved into silicon, called waveguides, which are designed to trap and guide light.

    • Laser Encoding: Instead of sending electrical signals, data (in the form of gradient updates) is converted into light pulses using a laser.

    • Light Transmission: These laser pulses travel through the silicon waveguides, carrying the data.

    • Minimal Degradation: Light experiences far less signal degradation than electrical signals over the same distance, and the waveguide design minimizes losses (see the back-of-envelope sketch after this list).

    • Far Less Heat: The optical path carries no electrical current, so it generates far less resistive heat than copper interconnects.

  • TeraPHY Chiplets: Ayar Labs’ chips (TeraPHY) are integrated directly alongside GPUs in the same package.

  • 1,024-GPU All-Photonic Server Rack (2028 Target): Ayar Labs partnered with Wiwynn to develop a rack supporting 1,024 GPUs utilizing photonic interconnects – a critical form factor for future AI training.

  • Market Signal: The NVIDIA/AMD co-investment clearly signals that CPO is transitioning from experimental to a core infrastructure commitment.
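To make the copper-versus-light argument concrete, here is a back-of-envelope sketch in Python. The per-meter loss figures are illustrative assumptions in roughly the right ballpark for high-speed serial links, not Ayar Labs' published specs.

```python
# Back-of-envelope: how much signal survives a copper link vs. an optical link.
# The dB-per-meter figures are illustrative assumptions, not vendor numbers.

COPPER_LOSS_DB_PER_M = 20.0   # assumed: passive copper at ~100 Gb/s signaling rates
OPTICAL_LOSS_DB_PER_M = 0.3   # assumed: coupling + waveguide/fiber loss, amortized per meter

def surviving_fraction(loss_db_per_m: float, distance_m: float) -> float:
    """Convert cumulative dB loss into the fraction of signal power remaining."""
    return 10 ** (-(loss_db_per_m * distance_m) / 10)

for distance_m in (0.1, 0.5, 1.0, 2.0):  # chip-to-chip out to rack scale
    copper = surviving_fraction(COPPER_LOSS_DB_PER_M, distance_m)
    optical = surviving_fraction(OPTICAL_LOSS_DB_PER_M, distance_m)
    print(f"{distance_m:4.1f} m   copper: {copper:8.6f}   optical: {optical:6.4f}")
```

The exact numbers matter less than the shape of the curve: electrical loss compounds fast past a meter, which is why copper links lean on power-hungry retimers and amplifiers while the optical path barely notices the distance.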

Story 3

Before V-JEPA 2.1, Models Turned the World into a Monet

“Rounded Flower Bed (Corbeille de fleurs)” (1876), Claude Monet.

We have all seen bad video camera footage. Whether it's security cam footage trying to ID a criminal (why is it always the worst quality 😭), or a video you upload to your laptop, there is a chance it gets pixelated… or Monet-ified.

Looking at this painting from afar, you can tell it is a garden, but who is the woman? What type of flowers are those? What kind of dress is she wearing?

Well, for vision models, no matter how sharp the video, the model was never trained to hold onto the detail. It always Monet-ified it.

Traditionally, training for video models only rewarded scene-level understanding, so the fix was:

  • Measure prediction quality on every token, not just the hidden ones

  • Evaluate intermediate layers directly, not just the final output.

Suddenly the model had to justify keeping the fine-grained detail at every step.
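Here is a minimal sketch of what that objective change can look like in training code, assuming a generic masked-prediction setup in PyTorch. The function names, loss choice, and which layers get tapped are illustrative assumptions, not Meta's released V-JEPA 2.1 code.

```python
import torch
import torch.nn.functional as F

def masked_final_layer_loss(pred, target, mask):
    """Old-style objective: score predictions only on the hidden (masked) tokens
    of the final layer. pred/target: (batch, tokens, dim); mask: (batch, tokens) bool."""
    return F.smooth_l1_loss(pred[mask], target[mask])

def dense_multilayer_loss(pred_layers, target_layers):
    """Sketch of the updated objective: score every token at several intermediate
    layers as well as the output, so fine-grained detail has to survive each step.
    Both arguments are lists of (batch, tokens, dim) tensors, one per tapped layer."""
    per_layer = [
        F.smooth_l1_loss(pred, target.detach())   # all tokens, no mask
        for pred, target in zip(pred_layers, target_layers)
    ]
    return torch.stack(per_layer).mean()
```

Once every token at every tapped layer is graded, averaging away the fine detail stops being a winning strategy for the model.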

A relevant real-world application:

  • Pinterest gets your vibe digitally. But a physical system reading you in a store couldn't execute on it — it could say "blazer" but couldn't read the structured Celine bag, the gold hardware, the knee-length hem, the whole visual language of what you're actually building toward. V-JEPA 2.1 is what closes that gap.

Why Meta specifically ground this out:

  • Meta is building physical AI infrastructure — Ray-Ban smart glasses, their robotics research division, AR headsets.

  • Also their entire business is built on understanding what you want precisely enough to serve it back to you. Losing information was never acceptable. Better spatial understanding of the physical world means better data on people...

Real World Applications:

  • Virtual try-on has been a gimmick for fifteen years. This is what makes it actually useful — understanding how fabric sits on your specific frame, where a hem falls, whether a shoulder seam lands right

  • Any app that asks you to take a photo and gets the answer slightly wrong: furniture measuring, skin condition readers, food portion estimators — all quietly get more accurate without anyone knowing why

Can ML engineers implement this?

  • Weights are open source on Meta's GitHub right now

  • You can download it and run a linear probe benchmark on your own task within a day, before spending any budget on fine-tuning (a minimal sketch follows below)
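A linear probe is just a single trainable layer on top of frozen features, so the experiment is cheap. A minimal sketch, assuming a hypothetical `load_vjepa_backbone()` stand-in for however the released checkpoint is actually loaded, and a `train_loader` yielding (video, label) batches from your own task:

```python
import torch
import torch.nn as nn

FEATURE_DIM = 1024    # set to whatever embedding size the checkpoint actually emits
NUM_CLASSES = 10      # your task's label count

backbone = load_vjepa_backbone()   # hypothetical loader; use the one in Meta's repo
backbone.eval()
for p in backbone.parameters():    # freeze everything except the probe
    p.requires_grad = False

probe = nn.Linear(FEATURE_DIM, NUM_CLASSES)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for videos, labels in train_loader:        # your own labeled clips
    with torch.no_grad():
        features = backbone(videos)        # (batch, FEATURE_DIM) frozen features
    loss = loss_fn(probe(features), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

If probe accuracy already looks decent on your task, that is a cheap signal a proper fine-tune is worth budgeting for; if it does not, you found out in an afternoon for free.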

We r cooked

Therabot: AI Therapy Cuts Depression Symptoms by 51% in First Proper Clinical Trial

The first RCT (Randomized Controlled Trial — the gold standard for testing medical treatments, where participants are randomly assigned to treatment or control groups) of a fully generative AI therapy chatbot published results in 2026: 210 adults with major depression, generalized anxiety, or eating disorders. The depression group saw a 51% average reduction in symptoms. Anxiety: 31% reduction.

I am not saying this is bad news. I am glad some people now have an outlet they otherwise could not get from humans, whether because of trust or money.

But it raises the question: are we finding more comfort in robots than in humans?

And is that a bad thing?

Hope Core 🌱

Great Barrier Reef + McLaren Racing

  • The Great Barrier Reef has lost ~50% of its coral since 1985 — manual diver-led replanting can't scale fast enough to keep up

  • McLaren Racing partnered with the Great Barrier Reef Foundation to build Machine One, a semi-automated coral-seeding robot now in field trials in Townsville

  • Separately, Australian scientists developed a "larval seedbox" — an underwater nursery that boosts coral settlement rates 56x over open-water seeding

  • Together they reframe reef restoration from a conservation problem into a systems engineering problem: the biology is solved, the bottleneck is logistics and deployment at scale

  • Gene-edited, heat-resistant coral fragments were also deployed on the reef in late 2025 — heat resistance + mechanical seeding at scale is the combination that could actually move the needle

Well, that is a wrap for this week, folks. Stay tuned for next week's latest research in AI and tech!

XOXO,

GG
