Adobe introduces Firefly Video, Runway Video-to-Video revolution, Open-world video game AI generation
ByteDance ‘Loopy’ image animation tool, Runway Gen:48 AI video competition, Qatar Airways interactive AI experience
Trending AI stories, ads & marketing campaigns 👇
📣 In the news
Grace Ling partnered with AI startup Humane to showcase the Grace Ling x Humane Handaxe Bag at NYFW SS25. The accessory, designed to house Humane's AI-powered pin, was inspired by stone-age tools. This marks Humane's second runway appearance, following their debut with Coperni at Paris Fashion Week last year. However, this latest collaboration seems to have garnered less attention compared to their previous high-profile showcase. The Handaxe Bag was presented alongside Ling's "Neanderthal" collection, blending ancient inspiration with futuristic technology.
GameGen-O is a diffusion transformer model designed for generating open-world video games. The model uses a comprehensive dataset called OGameData, collected from over 100 next-generation open-world games. GameGen-O undergoes a two-stage training process: foundation model pretraining and instruction tuning. The system enables high-quality, open-domain generation of game elements, including characters, environments, actions, and events, with interactive controllability. This innovative approach represents a significant step towards automating and streamlining the game development process, potentially making it more accessible to a wider range of creators.
The Brandtech Group has developed a tool to address biases in AI-generated content. This tool aims to improve the fairness and accuracy of AI outputs in marketing and advertising. It analyzes and adjusts AI-generated content to reduce potential biases based on factors like gender, race, or age. The Brandtech Group's initiative highlights the growing concern about AI biases in the marketing industry. By addressing these issues, the company hopes to create more inclusive and representative AI-generated content for diverse audiences.
Adobe has announced the upcoming release of Firefly Video Model, a generative AI tool for video creation. This new technology will allow users to generate and edit video content using text prompts and AI-powered tools. Firefly Video Model builds upon Adobe's existing Firefly technology for image generation. The tool aims to revolutionize video production workflows by making complex editing tasks more accessible and efficient. Adobe envisions this technology empowering creators to produce high-quality video content with greater speed and creative flexibility.
ByteDance, the parent company of TikTok, has introduced 'Loopy,' a generative AI tool that can animate static images. This technology allows users to bring still images to life by adding motion and animation effects. Loopy represents ByteDance's entry into the competitive field of generative AI, joining other tech giants in developing innovative AI-powered creative tools. The tool demonstrates the growing trend of AI applications in visual content creation and manipulation. Loopy's introduction suggests ByteDance's ambition to expand its influence in the AI-driven creative technology sector.
👀 Creative picks
Runway Gen:48 AI video competition
A unique film competition took place this weekend, challenging creators to produce short films between 1 and 4 minutes in length. Participants were required to include one element from each of several specified categories, such as natural disaster, graduation, medieval Europe, or underwater. All generative video had to be created with Runway, an AI video tool; while Runway was mandatory for video generation, creators could use images from other sources provided they owned the rights. The contest showcased the possibilities at the intersection of traditional storytelling and AI-assisted content creation, pushing filmmakers to explore new creative territories within these constraints.
Mango's AI-Driven Teen Campaign
Mango has unveiled an innovative marketing initiative for the limited-edition Sunset Dream collection of its youth line. The process began with capturing real photos of each garment in the collection. These images were used to train a generative AI model to produce new visuals, positioning the garments on digital models. The key challenge was to achieve editorial-quality images that matched the high standards of traditional fashion campaigns while preserving the true essence of the garments and models.
Once the AI generated the initial images, Mango's art team took over to select, retouch, and perfect the visuals in the studio, ensuring the final product was both striking and authentic. Mango also uses a generative AI image platform to help its design and product teams explore different concepts, co-create prints, fabrics, and garments, and find inspiration for window dressing, architecture, and interior design.
Credits: Mango
Qatar Airways' Interactive AI Ad Experience
Qatar Airways has launched a captivating three-minute ad that reimagines a classic romantic comedy storyline. The video features a narrative in which a chance encounter between a man and a woman leads to a globe-trotting quest: the man spends three months traveling on Qatar Airways flights in search of the woman who lost her earring.
Adding a unique twist, the campaign invites viewers to become part of the story by using face-scanning software and AI. This interactive element allows audiences to insert their own faces into scenes from the video, creating a personalized and immersive experience. This innovative use of AI not only engages viewers on a deeper level but also enhances the emotional appeal of the brand’s travel offerings.
Credits: Qatar Airways / McCann Worldgroup
📰 Get the mahazine volume #2
Call for contributors: Volume #3 of the mahazine!
We're excited to announce that for our upcoming volume, we’re seeking creative professionals who are innovating with AI or exploring its integration into their creative processes. If you’re passionate about AI in your work, we want to hear from you!
Join us in sharing your insights and experiences. Please fill out → this form if you want to participate and get featured!
🐲 Tutorial
Video-to-Video in Runway
Use a text prompt to give your videos a whole new look with Video-to-Video.
Generated videos are currently limited to a 16:9 aspect ratio and a maximum length of 10 seconds.
Step 1: Upload a video
→ Click “Drop an Image or Video” to upload a video to the prompt section.
Important note: Video-to-Video is only available in the Gen-3 Alpha model.
Step 2: Input your prompt
→ The more detailed your text prompt, the more impressive the style transformation in Video-to-Video.
Step 3: Structure settings
→ You can modify the structural consistency between the input and output using the “Structure transformation” setting in the settings controls. Lower values (toward 0.00) produce an output that maintains the structure of the original video, while higher values (toward 1.00) produce more abstract outputs.
Step 4: Have fun
See you next week for a new tutorial 👋
🦄 Weekly visuals
Submarine dystopia
made by Mahage
💎 Post of the week
One last thing 👇
Did you like this content? Subscribe to get more content like this directly in your inbox.
If you enjoy the mahazine, please spread the word by sharing it with your colleagues, friends, and fellow creatives. Your support in growing our community is invaluable as it allows us to reach more creative minds and continue delivering high-quality content.
How would you rate today's edition?