
Ricky T. Q. Chen @RickyTQChen
Research Scientist at FAIR NY, Meta. I build simplified abstractions of the world through the lens of dynamics and flows. rtqichen.com
Joined May 2013 · Tweets: 208 · Followers: 3,798 · Following: 806

Michael Galkin @ NeurIPS @michael_galkin
2 weeks ago
I will be at #NeurIPS2023 in New Orleans and bring a new batch of 5 swag t-shirts (as per @mmbronstein request) - now in M/L sizes! Attending generative modeling workshops in one of those would be fun 🫠

Phillip Isola @phillip_isola
a week ago
This paper is so cool: dangeng.github.io/visual_anagram… It shows several kinds of illusions that I had never seen before (e.g., color inversion illusion). It's exciting to see more and more cases like this, where AI opens up new kinds of art, rather than only imitating old forms.

Justine Moore @venturetwins
2 weeks ago
Obsessed with the new “make it more” trend on ChatGPT.
You generate an image of something, and then keep asking for it to be MORE.
For example - spicy ramen getting progressively spicier 🔥 (from u/dulipat) twitter.com/i/web/status/1…

Rodrigo A. Vargas-Hdz @RoVargasHdz
2 weeks ago
Fresh out of @arxiv and accepted at @ML4PhyS #NeurIPS2023
Exciting work by @Alegendree & collab. @RickyTQChen We propose continuous Normalizing Flows as an ansatz for the electron density in the orbital-free DFT problem. Our approach is Lagrange-free😎
arxiv.org/abs/2311.13518

AI at Meta @AIatMeta
3 weeks ago
Today we’re sharing two new advances in our generative AI research: Emu Video & Emu Edit.
Details ➡️ bit.ly/47THS1l
These new models deliver exciting results in high quality, diffusion-based text-to-video generation & controlled image editing w/ text instructions.
🧵

Simone Scardapane @s_scardapane
a month ago
*Flow Matching for Generative Modeling*
by @lipmanya @RickyTQChen @helibenhamu @mnick
@lematt1991
Provides a scalable algorithm for training continuous normalizing flows. CNFs have a large design space, with standard diffusion processes as a limiting case.
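As a rough illustration of the idea (not the paper's code), conditional flow matching with the linear path reduces CNF training to a simple regression problem. A minimal NumPy sketch, where `flow_matching_batch` and all names are my own:

```python
import numpy as np

def flow_matching_batch(x1, rng):
    """Build regression targets for conditional flow matching with the
    linear probability path x_t = (1 - t) x0 + t x1, whose conditional
    velocity target is simply x1 - x0."""
    x0 = rng.standard_normal(x1.shape)   # samples from the noise prior
    t = rng.uniform(size=(len(x1), 1))   # random times in [0, 1]
    xt = (1.0 - t) * x0 + t * x1         # point on the path
    ut = x1 - x0                         # velocity the model regresses to
    return t, xt, ut

# A vector-field network v(t, x) would then be trained with the
# loss ||v(t, xt) - ut||^2, no ODE simulation needed during training.
rng = np.random.default_rng(0)
t, xt, ut = flow_matching_batch(rng.standard_normal((4, 2)), rng)
```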

Kyunghyun Cho @kchonyc
a month ago
a number of weird definitions and weirdly specific points, but overall, worth reading it to see which areas are considered as priorities by WH. in this 🧵, let me copy-paste those few weird/interesting/specific points i found reading it. whitehouse.gov/briefing-room/…

Kilian Fatras @FatrasKilian
2 months ago
A lot of research can be done with Flow Matching! Riemannian Flow Matching is by far one of my favourite papers this year and has opened the door to many biological applications. Also, working on the torchCFM library has been fun 😁 I am excited for 2024! twitter.com/michael_galkin…

Ashwini Pokle @ashwini1024
2 months ago
📢 Excited to share my summer internship work on “Training-free linear image inversion via flows”.
Paper: arxiv.org/pdf/2310.04432…
TL;DR: We propose a training-free method for linear image inversion by leveraging pretrained flow models trained via flow matching.
Thread 🧵

Ricky T. Q. Chen @RickyTQChen
2 months ago
We propose an efficient algorithm that optimizes diffusion models based on arbitrary task-specific / creative / user-specified optimality conditions. Going into RL / control territory, while making use of the matching frameworks designed originally for generative modeling. twitter.com/guanhorng_liu/…

Ján Drgoňa @jan_drgona
3 months ago
Neural Ordinary Differential Equations (NODE) made easy in Neuromancer.
NODEs are black-box continuous models suitable for system identification from time series data. See how to train NODE in Neuromancer using this Colab example:
colab.research.google.com/github/pnnl/ne…

An Thái Lê @ NeurIPS 2023 @an_thai_le
3 months ago
I am very happy to announce that our work: "Accelerating Motion Planning via Optimal Transport" has been accepted to #NeurIPS2023!
Project website: sites.google.com/view/sinkhorn-…

Ricky T. Q. Chen @RickyTQChen
4 months ago
Suddenly, code. github.com/facebookresear… twitter.com/RickyTQChen/st…

Dishank Bansal @theshank9
4 months ago
Come check out our poster at the Differentiable Almost Everything Workshop @icmlconf.

Ricky T. Q. Chen @RickyTQChen
5 months ago
We train diffusion/flow models that are sample efficient & consistent by design, significantly reducing sampling costs for free! #ICML2023
Multisample Flow Matching can circumvent the bias in minibatch OT maps, enabling their use in generative modeling.
arxiv.org/abs/2304.14772
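To give a flavor of the minibatch OT idea (a sketch under my own naming, not the paper's code): instead of pairing noise and data samples at random, re-pair them within each batch by an optimal transport assignment before building the flow matching targets. Brute-force permutation search stands in here for the Hungarian or Sinkhorn solver a real implementation would use:

```python
import numpy as np
from itertools import permutations

def minibatch_ot_pairing(x0, x1):
    """Re-pair noise samples x0 with data samples x1 via the in-batch OT
    map: the permutation minimizing total squared distance. Brute force
    is only viable for tiny batches; use a Hungarian or Sinkhorn solver
    in practice."""
    n = len(x0)
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    best = min(permutations(range(n)),
               key=lambda p: cost[np.arange(n), list(p)].sum())
    return x1[list(best)]

# Nearby points get paired: noise at 0 goes to data at 1, noise at 10 to 9,
# giving straighter paths than the random pairing (0 -> 9, 10 -> 1).
paired = minibatch_ot_pairing(np.array([[0.0], [10.0]]),
                              np.array([[9.0], [1.0]]))
```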

Joelle Pineau @jpineau1
6 months ago
New results from the FAIR team on generative speech across multiple tasks. This model is based on Flow Matching techniques. Worth looking at the samples in the blog post! twitter.com/AIatMeta/statu…

Ricky T. Q. Chen @RickyTQChen
6 months ago
A generative speech model from Meta AI powered by *Flow Matching* 🥳 !! Amazing work by the team. twitter.com/AIatMeta/statu…

Yaron Lipman @lipmanya
6 months ago
📣 A new #ICML2023 paper investigates the Kinetic Energy of Gaussian Probability Paths which are key in training diffusion/flow models. A surprising takeaway: In high dimensions *linear* paths (Cond-OT) are Kinetic Optimal!
Led by @shaulneta w/ @RickyTQChen @lematt1991 @mnick

Ricky T. Q. Chen @RickyTQChen
7 months ago
@TDaulbaev @lipmanya Not yet, we’re aiming to release in around a month.

Tivadar Danka @TivadarDanka
8 months ago
I described some of the most beautiful and famous mathematical theorems to Midjourney.
Here is how it imagined them:
1. "The set of real numbers is uncountably infinite."

Philipp Hennig @PhilippHennig5
8 months ago
Opening the #ProbNum School last Monday allowed me to argue my take on the Generative Revolution:
As an AI/ML student, no matter how you feel about GPT et al., there’s never been a better time to focus on, wait for it,
ALGORITHMS!
Sure, I’d say that. Here’s why, though:

Dinghuai Zhang 张鼎怀 @ NeurIPS2023 @zdhnarsil
9 months ago
Our #ICML2023 workshop proposal "Structured Probabilistic Inference & Generative Modeling" has been accepted 🎉. We can't wait to engage in insightful discussions with experts in probabilistic ML and other areas in beautiful Hawaii 🌴🏖️. Check 🔍: spigmworkshop.github.io

Ricky T. Q. Chen @RickyTQChen
9 months ago
@cisprague Hmm, not too sure. There could be some interesting connections. We have a work in progress that shows, roughly, as dimension gets larger, the models induced by geodesic paths become closer to optimal transport solutions (between the two distributions at t=0 and t=1).

Omer Bar Tal @omerbartal
10 months ago
Excited to share "MultiDiffusion"!
A controlled image generation framework w/ pre-trained text-to-image diffusion model.
* Spatial guidance controls (bounding boxes/masks)
* Arbitrary aspect ratios (huge Panoramas!)
NO training NO finetuning.
[1/3]@YarivLior @lipmanya @talidekel

Joan Serrà @serrjoa
10 months ago
No more diffusion guys, now we do flow matching. twitter.com/lipmanya/statu…

Ricky T. Q. Chen @RickyTQChen
10 months ago
@daibond_alpha The main reason I wanted to mention this here is that diffusion models still need to simulate an (albeit simple) SDE during training even on simple manifolds, IMO an inconvenience compared to training diffusion models on Euclidean spaces.

Ricky T. Q. Chen @RickyTQChen
10 months ago
@daibond_alpha Yes, to sample and evaluate log-likelihoods after training, simulation of the model is still required. (Riemannian) Flow Matching is a training procedure for CNFs. Apologies for not being clear!