Jerry Tworek's Homepage

I'm a research lead at OpenAI, focusing on teaching language models to solve problems in Science, Technology, Engineering, Mathematics, and Programming. I care deeply about applying their skills to real-world problems.

Mathematician at heart. Former quant trader, sometimes still passionate about financial markets.

Very cautious AI optimist. If we play our cards right, AI technology may help us solve many of the problems troubling our civilization. It may also help us answer the question of what it actually means to be human.

[firstname]@[this domain]


Some of my work

In reverse chronological order

ChatGPT Plugins Deployment, March 2023
I led research on integrating third-party plugins and the code interpreter with ChatGPT.
ChatGPT Plugins logo
GPT-4 Research, March 2023
My team's research on teaching language models to write computer programs made GPT-4 the strongest model in the world at solving programming challenges.
GPT-4 logo
ChatGPT Deployment, November 2022
My team's research on dialog-based agents that write code made ChatGPT useful for programmers around the world.
ChatGPT logo
I have supervised and advised research into using Large Language Models to fill in the middle.
Efficient Training of Language Models to Fill in the Middle logo
Edit & Insert Deployment, March 2022
My team's research has enabled new ways to interact with language models beyond completing the prompt.
Edit & Insert logo
Some of my work helped a team at OpenAI train great code embedding models.
Text and Code Embeddings by Contrastive Pre-Training logo
Some ideas that we've developed when teaching large language models to program could also be applied to math problems.
Training Verifiers to Solve Math Word Problems logo
OpenAI Codex Deployment, August 2021
I was the primary contributor to researching and training the Codex models. We released those models to the developer community through the OpenAI API.
OpenAI Codex logo
I started research at OpenAI into training Large Language Models on code and was one of the primary contributors to the paper sharing some of our methodology.
Evaluating Large Language Models Trained on Code logo
GitHub Copilot Deployment, June 2021
I conducted all stages of research and trained the model that powered GitHub Copilot.
GitHub Copilot logo
I delivered one of the core breakthroughs that allowed our team to teach a neural network policy to solve a Rubik's Cube with a robot hand using sim2real transfer.
Solving Rubik’s Cube with a robot hand logo