Runway brings AI-powered video editing to iOS with mobile app launch
Runway has launched its first mobile app on iOS, bringing its video-to-video generative AI model, Gen-1, to users' phones (with text-to-video to follow). The app lets users transform existing videos by applying an aesthetic or theme, much like a style transfer tool: with Gen-1, they can apply preset filters or generate entirely new videos from a prompt. Because the model is generative, the output is often strange, but it remains fun to use.
The app's video processing happens in the cloud, and each creation takes around two to three minutes to generate. There are limitations for now: the app only works with footage up to five seconds long, and it will not generate nudity. Despite these constraints, Runway CEO Cristóbal Valenzuela believes that bringing these tools to mobile is important, comparing the current era of generative AI to the "optical toys" phase of the 19th century. The app's ease of use and creative potential make it an exciting tool to experiment with.
Runway's mobile app offers a more fluid experience than the company's web-based software suite, where the gap between capturing footage and generating from it is much wider. The app currently supports Runway's Gen-1 model and will soon add the purely generative Gen-2 model. Processing speeds may improve over time as Runway develops more efficient algorithms, and the mobile app is just one example of the growing potential of generative AI and its possible impact on the future of content creation.