Vid2Seq: A pretrained visual language model for describing multi-event videos – Google AI Blog

March 18, 2023


Posted by Antoine Yang, Student Researcher, and Arsha Nagrani, Research Scientist, Google Research, Perception team

Videos have become an increasingly important part of our daily lives, spanning fields such as entertainment, education, and communication. Understanding the content of videos, however, is a challenging task, as videos often contain multiple events occurring at different time scales. For example, a video of a musher hitching up dogs to a dog sled before they all race away involves a long event (the dogs pulling the sled) and a short event (the dogs being hitched to the sled). One way to spur research in video understanding is via the task of dense video captioning, which consists of temporally localizing and describing all events in a minutes-long video. This differs from single image captioning and standard video captioning, which consist of describing short videos with a single sentence.

Dense video captioning systems have broad applications, such as making videos accessible to people with visual or auditory impairments, automatically generating chapters for videos, or improving the search of video moments in large databases. Current dense video captioning approaches, however, have several limitations. For example, they often contain highly specialized task-specific components, which make them challenging to integrate into powerful foundation models. Furthermore, they are often trained only on manually annotated datasets, which are very difficult to obtain and hence are not a scalable solution.

In this post, we introduce “Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning”, to appear at CVPR 2023. The Vid2Seq architecture augments a language model with special time tokens, allowing it to seamlessly predict event boundaries and textual descriptions in the same output sequence. In order to pre-train this unified model, we leverage unlabeled narrated videos by reformulating sentence boundaries of transcribed speech as pseudo-event boundaries, and using the transcribed speech sentences as pseudo-event captions. The resulting Vid2Seq model, pre-trained on millions of narrated videos, improves the state of the art on a variety of dense video captioning benchmarks including YouCook2, ViTT and ActivityNet Captions. Vid2Seq also generalizes well to the few-shot dense video captioning setting, the video paragraph captioning task, and the standard video captioning task. Finally, we have also released the code for Vid2Seq here.

Vid2Seq is a visual language model that predicts dense event captions together with their temporal grounding in a video by generating a single sequence of tokens.

A visual language model for dense video captioning

Multimodal transformer architectures have improved the state of the art on a wide range of video tasks, such as action recognition. However, it is not straightforward to adapt such an architecture to the complex task of jointly localizing and captioning events in minutes-long videos.

For a general overview of how we achieve this, we augment a visual language model with special time tokens (like text tokens) that represent discretized timestamps in the video, similar to Pix2Seq in the spatial domain. Given visual inputs, the resulting Vid2Seq model can both take as input and generate sequences of text and time tokens. First, this enables the Vid2Seq model to understand the temporal information of the transcribed speech input, which is cast as a single sequence of tokens. Second, this allows Vid2Seq to jointly predict dense event captions and temporally ground them in the video while generating a single sequence of tokens.
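As a rough illustration of this discretization, here is a minimal Python sketch that maps timestamps to time tokens appended after the text vocabulary. The bin count, vocabulary size, and helper names are our own assumptions for clarity, not the exact values used in the paper.

```python
# Minimal sketch of mapping continuous timestamps to discrete "time tokens".
# NUM_TIME_BINS and TEXT_VOCAB_SIZE are illustrative assumptions.

NUM_TIME_BINS = 100          # assumed number of discrete time bins
TEXT_VOCAB_SIZE = 32_000     # assumed size of the base text vocabulary

def timestamp_to_token(t_seconds: float, video_duration: float) -> int:
    """Quantize a timestamp into one of NUM_TIME_BINS time tokens,
    placed after the text vocabulary in a shared token space."""
    frac = min(max(t_seconds / video_duration, 0.0), 1.0)
    bin_idx = min(int(frac * NUM_TIME_BINS), NUM_TIME_BINS - 1)
    return TEXT_VOCAB_SIZE + bin_idx

def token_to_timestamp(token_id: int, video_duration: float) -> float:
    """Inverse mapping: recover the approximate timestamp a time token encodes."""
    bin_idx = token_id - TEXT_VOCAB_SIZE
    return (bin_idx + 0.5) / NUM_TIME_BINS * video_duration

# Example: a 180-second video, an event starting at 42.3 s
tok = timestamp_to_token(42.3, 180.0)
print(tok, round(token_to_timestamp(tok, 180.0), 1))
```

Because time tokens share one vocabulary with text tokens, the decoder can emit event boundaries and caption words in a single output sequence.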

The Vid2Seq architecture includes a visual encoder and a text encoder, which encode the video frames and the transcribed speech input, respectively. The resulting encodings are then forwarded to a text decoder, which autoregressively predicts the output sequence of dense event captions together with their temporal localization in the video. The architecture is initialized with a powerful visual backbone and a strong language model.
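The PyTorch-style skeleton below is a hedged sketch of this encoder-decoder layout. Module sizes, layer counts, and the use of generic Transformer blocks are illustrative placeholders; the actual model is initialized from a pretrained visual backbone and a pretrained language model rather than built from scratch like this.

```python
import torch
import torch.nn as nn

class Vid2SeqSketch(nn.Module):
    """Illustrative layout: a visual encoder for frame features, a text encoder
    for transcribed speech, and a text decoder that autoregressively emits
    interleaved text and time tokens. All sizes are arbitrary placeholders."""

    def __init__(self, vocab_size=32_100, d_model=512, frame_dim=768):
        super().__init__()
        self.frame_proj = nn.Linear(frame_dim, d_model)          # project frame features
        self.visual_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.token_embed = nn.Embedding(vocab_size, d_model)     # shared text + time tokens
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, frame_feats, speech_tokens, target_tokens):
        vis = self.visual_encoder(self.frame_proj(frame_feats))  # encode video frames
        txt = self.text_encoder(self.token_embed(speech_tokens)) # encode transcribed speech
        memory = torch.cat([vis, txt], dim=1)                    # multimodal memory for decoding
        causal = nn.Transformer.generate_square_subsequent_mask(target_tokens.size(1))
        dec = self.decoder(self.token_embed(target_tokens), memory, tgt_mask=causal)
        return self.lm_head(dec)                                 # logits over text + time tokens
```

The key design point is that a single decoder vocabulary covers both caption words and discretized timestamps, so localization and description come out of one sequence.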

Vid2Seq model overview: we formulate dense event captioning as a sequence-to-sequence problem, using special time tokens to allow the model to seamlessly understand and generate sequences of tokens containing both textual semantic information and temporal localization information grounding each text sentence in the video.

Large-scale pre-training on untrimmed narrated videos

Due to the dense nature of the task, the manual collection of annotations for dense video captioning is particularly expensive. Hence we pre-train the Vid2Seq model using unlabeled narrated videos, which are easily available at scale. In particular, we use the YT-Temporal-1B dataset, which includes 18 million narrated videos covering a wide range of domains.

We use transcribed speech sentences and their corresponding timestamps as supervision, which are cast as a single sequence of tokens. We pre-train Vid2Seq with a generative objective that teaches the decoder to predict the transcribed speech sequence given visual inputs only, and a denoising objective that encourages multimodal learning by requiring the model to predict masked tokens given a noisy transcribed speech sequence and visual inputs. In particular, noise is added to the speech sequence by randomly masking out spans of tokens.
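To make this supervision concrete, the sketch below shows one way transcribed speech could be cast into a single target sequence of interleaved time and text tokens, and how random spans could be masked to build the noisy input for the denoising objective. The masking ratio, span length, tokenizer interface, and the reuse of the hypothetical timestamp_to_token helper from the earlier sketch are all our own assumptions, not the paper's exact settings.

```python
import random

def build_speech_sequence(asr_segments, video_duration, tokenize):
    """Cast transcribed speech into one flat token sequence: each sentence is
    preceded by its start and end time tokens, followed by its text tokens.
    asr_segments: list of (start_sec, end_sec, sentence) tuples."""
    sequence = []
    for start, end, sentence in asr_segments:
        sequence.append(timestamp_to_token(start, video_duration))  # pseudo-event start
        sequence.append(timestamp_to_token(end, video_duration))    # pseudo-event end
        sequence.extend(tokenize(sentence))                          # pseudo-event caption
    return sequence

def mask_spans(tokens, mask_token_id, mask_ratio=0.25, mean_span_len=3):
    """Corrupt the speech sequence for the denoising objective by masking random
    contiguous spans; the model must recover the masked tokens from the corrupted
    sequence plus the visual input. Ratio and span length are assumed values."""
    noisy = list(tokens)
    num_to_mask = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < num_to_mask and len(tokens) > 1:
        span = max(1, round(random.expovariate(1.0 / mean_span_len)))
        start = random.randrange(0, len(tokens) - 1)
        for i in range(start, min(start + span, len(tokens))):
            noisy[i] = mask_token_id
        masked += span
    return noisy
```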

Vid2Seq is pre-trained on unlabeled narrated videos with a generative objective (top) and a denoising objective (bottom).

Results on downstream dense video captioning benchmarks

The resulting pre-trained Vid2Seq model can be fine-tuned on downstream tasks with a simple maximum likelihood objective using teacher forcing (i.e., predicting the next token given previous ground-truth tokens). After fine-tuning, Vid2Seq notably improves the state of the art on three standard downstream dense video captioning benchmarks (ActivityNet Captions, YouCook2 and ViTT) and two video clip captioning benchmarks (MSR-VTT, MSVD). In our paper we provide additional ablation studies, qualitative results, as well as results in the few-shot settings and in the video paragraph captioning task.
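A minimal sketch of such a fine-tuning step is shown below, assuming a model with the interface of the earlier Vid2SeqSketch and a standard cross-entropy loss. The optimizer, padding convention, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def finetune_step(model, optimizer, frame_feats, speech_tokens, target_tokens, pad_id=0):
    """One maximum-likelihood step with teacher forcing: the decoder is fed the
    ground-truth sequence shifted right and trained to predict the next token
    at every position. Assumes model(frame_feats, speech_tokens, decoder_input)."""
    decoder_input = target_tokens[:, :-1]       # ground-truth prefix fed to the decoder
    labels = target_tokens[:, 1:]               # next-token targets
    logits = model(frame_feats, speech_tokens, decoder_input)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),    # flatten to (batch * length, vocab)
        labels.reshape(-1),
        ignore_index=pad_id)                    # do not penalize padding positions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```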

Comparison to state-of-the-art methods for dense video captioning (left) and for video clip captioning (right), on the CIDEr metric (higher is better).

Conclusion

We introduce Vid2Seq, a novel visual language model for dense video captioning that simply predicts all event boundaries and captions as a single sequence of tokens. Vid2Seq can be effectively pretrained on unlabeled narrated videos at scale, and achieves state-of-the-art results on various downstream dense video captioning benchmarks. Learn more from the paper and grab the code here.

Acknowledgements

This research was conducted by Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic and Cordelia Schmid.


