3D hair AI reconstruction, Reddit enhances ad tools, Mondelēz partners with Accenture and Publicis for AI marketing
Runway funds AI-generated films, James Cameron joins the Stability AI board, Video control in Kling
Trending AI stories, ads & marketing campaigns 👇
📣 Announcement
We just launched the 3rd edition of the mahazine 🔥🔥🔥🔥🔥🔥
More details later in the newsletter 👇
📣 In the news
Researchers develop unified method for 3D hair modeling from single views. The approach works for both braided and unbraided hairstyles, using synthetic training data and a 3D Gaussian representation. It achieves state-of-the-art results on complex hairstyles and generalizes well to real images despite being trained only on synthetic data. The method combines diffusion priors and refinement modules to reconstruct detailed 3D hair shapes and textures.
Reddit launches new AI-powered tools to assist advertisers. The ads inspiration library showcases top-performing ads and identifies creative best practices using AI. An AI copywriter generates Reddit-specific ad copy, while an auto-cropper adjusts images to recommended display ratios. These tools aim to help small businesses create effective Reddit ads more easily by leveraging AI and automation throughout the campaign process.
Mondelēz International collaborates on AI-powered marketing platform. The system will enable faster, personalized content creation using generative AI. Accenture provides the data and insights foundation, while Publicis leads creative asset generation. The platform aims to help Mondelēz brands stay ahead of consumer trends through AI-driven innovation in marketing and consumer engagement.
AI video company Runway launches $5 million fund for AI-powered films. The Hundred Film Fund will support up to 100 projects using Runway's generative video models. Grants range from small amounts to $1 million, plus additional service credits. The initiative aims to jumpstart adoption of AI in filmmaking and uncover innovative uses of the technology in video production of shorts, documentaries, music videos and more.
Filmmaker James Cameron joins board of Stability AI. Cameron sees generative AI as the "next wave" in film technology and visual effects. His role aims to shape the future of visual media using AI. Stability AI, creator of the Stable Diffusion model, views Cameron's involvement as transformative for the company and AI industry overall.
Sam Altman forecasts superintelligent AI within "few thousand days". The OpenAI CEO envisions AI solving major challenges and enabling revolutionary advances. He predicts personalized AI tutors and autonomous assistants becoming commonplace. Altman emphasizes the need for infrastructure development to support widespread AI adoption and maximize its benefits.
👀 Creative picks
Exploring the Intersection of Animation and AI in 'HumAIn'
BluBlu Studios has launched 'HumAIn,' an animated short that merges traditional animation techniques with generative AI tools. Created by Miłosz Kokociński, this film explores how AI can enhance human creativity rather than replace it.
The animation employs Stable Diffusion and img2img methods, while maintaining stylistic consistency through stop-motion effects. The soundscape, crafted by composer Robert Ostiak, integrates AI-generated jazz vocals with electronic music, reflecting the themes of self-awareness and discovery.
Kokociński emphasizes that generative AI is a tool that expands creative possibilities. 'HumAIn' marks a significant step for BluBlu Studios in exploring AI's potential in animation, with plans for future projects celebrating human creativity.
Credits: Miłosz Kokociński (Direction and animation) / Robert Ostiak (Sound & Music) / BluBlu Studios (Production)
🔥 New release
🎉 We’re Thrilled to Announce the Launch of The Mahazine Issue 3! 🎉
After three months of hard work and dedication, we’re excited to share our latest edition with you! This issue dives deep into how AI is transforming creativity, marketing, and more. 👇🏼
What Awaits You Inside?
🔥Global Trends: Stay ahead with insights into the evolving AI creative landscape.
👀Visual Inspirations: Fuel your creativity with captivating imagery.
📣Trending AI News: Keep up with the latest digital campaigns and advancements.
🎒AI Tutorials: Master new techniques and enhance your skills.
⭐And More...: Explore a wealth of resources tailored for creative minds.
We truly hope you will enjoy the insights and inspiration we’ve packed into this edition. Your support means the world to us!
Thank you for being a part of our journey. We can’t wait to hear what you think!
Best,
The Mahazine Team
🐲 Tutorial
Video control in Kling
Kling released a video control feature for Image to Video that lets users paint motion brushes over specific areas to direct the movement of an object or a person.
→ Try it here
Step 1: Select a frame to animate
→ First, make sure you're on the 'Image to Video' tab, not 'Text to Video'. Then, upload an image. Try animating simple compositions with the motion brush. The subject should be easy to see against the background.
Step 2: Input a prompt
→ Even if you use the motion brush to indicate movements, it's helpful to add a prompt describing the desired action. For example, in our case, we'd want a prompt like 'Jack climbing the wooden board'.
Step 3: Motion Brush
→ After adding your image and prompt, it's time to use the motion brush. Click the 'Draw Motions' button to start.
→ You'll see an interface where you can manually select your subjects or use 'auto-segmentation' to have the AI do it for you. Give each subject its own color.
→ After selecting an area, draw a path arrow to show the desired movement direction. You can add up to 6 areas with different paths. To keep an area static, brush over it with the 'Static Area' brush. When you're done, you can save your selection and close the motion brush modal.
Step 4: Generate your video
→ Before generating, choose your preferred mode, video length (5 or 10 seconds), and the number of video outputs. Then, click 'generate' and wait for the render to complete.
→ And, done!
If you're not happy with the result, you can regenerate, or edit your prompt and motion brushes and try again.
Do you think he made it onto the board?
See you next week for a new tutorial
🦄 Weekly visuals
Abstract dump
💎 Post of the week
Today we're excited to share updates to Gen-3 Alpha Turbo that bring more control and the ability to generate vertical videos.
Learn more below.
(1/4)
— Runway (@runwayml)
5:59 PM • Sep 27, 2024
One last thing 👇
Did you like this content? Subscribe to get more content like this directly in your inbox.
If you enjoy the mahazine, please spread the word by sharing it with your colleagues, friends, and fellow creatives. Your support in growing our community is invaluable as it allows us to reach more creative minds and continue delivering high-quality content.