If you were reading through Twitter (always Twitter, never X) last week, you may have come across the storm stirred up by the article "RIP Design Agency: The Anti-Design Agency Manifesto." While the title is clickbait mastery, the gist of the article is pretty straightforward:
All things being equal, the efficiencies that AI can bring to workflows will give the agencies that adopt it an advantage over those that do not.
One of the most common questions that I see in the community around Generative AI is whether specific models can be used for professional/commercial work, and what claims of "safe for commercial use" actually mean. These questions, coupled with uncertainty over Copyright law and other licensing issues, have led to a lot of confusion when trying to understand the implications of using specific models for projects.
This post provides a non-legal, US-centric overview of Generative AI and claims around commercial safety, and discusses factors to consider when determining whether a model can safely be used for any particular project. It is not meant to be an exhaustive discussion of the topics, but can hopefully serve as a jumping-off point for going deeper in any specific area.
I have had the fortune of being able to travel to Tokyo a bunch of times in my career, and I often get asked for suggestions on places to visit. I figured I would write up a post to make it easier to share.
I am doing some research to find some smaller places (especially Ramen) that I have been to, and will update this post as I find them.
General Payment / Money

To start, get a Suica Card, which is basically a debit card you can put money on and use for a ton of stuff, including the subway. The easiest way to use it is to add it to Apple Pay on your phone, which makes it convenient both to use and to add funds to. You can also use it at pretty much every convenience store.
There has been a lot of excitement over the past week or so with the release of Google's Gemini 2.5 Flash image model (Nano Banana). The model provides some of the best output quality and control we have seen thus far.
As usually happens with any big generative AI model release, a fresh wave of clickbaity “Photoshop is dead” posts has followed. This week was no different, though the claims were louder and more frequent, on par with the level of excitement around Gemini 2.5.
This post proposes a framework for thinking about AI and its impact on the creative industry.
I work at Adobe, but these views are my own.
Over the past three years, the conversation around AI in the creative community has centered almost entirely on generative AI (specifically image and video generation). For many creators, generative AI feels threatening because it appears to automate work that was once theirs alone. At the same time, many models are trained on unlicensed content, raising ethical, legal, and fairness concerns.
Adobe has been doing a lot of work over the past year and a half integrating generative AI functionality into its tools, with features like Generative Fill in Photoshop, Generative Remove in Lightroom, and Generative Extend in Premiere Pro. However, the world of AI is much larger than just generative AI and video/image generation. Specifically, agentic AI and general-purpose AI assistants that support MCP (such as Claude and ChatGPT) provide a much larger opportunity to help creatives across their entire workflow.