Replit CEO Amjad Masad on bringing the next 1 billion software creators online
Freethink spoke with Masad about the future of software development, the outsized power of Silicon Valley, and the absurdity of the AI extinction theory.
Imagine you had zero coding skills but wanted to build a very simple app from scratch. How long would it take? A decade ago, it might have taken weeks or months. Today, it’s an hour or less.
That’s how long it took me to create my own video game on Replit, the collaborative coding platform where AI helps you write, edit, and launch code within your browser.
I told Replit’s AI what I wanted in plain English: a basic rip-off of Pac-Man. Out came a draft of the code. The game didn’t work properly at first, but after a bit of troubleshooting and tweaking, I hit “Run” and then started pressing my arrow keys to hustle my character away from the enemies and their “boss,” which I’d programmed to be bigger and faster than the others.
Replit didn’t give me the delusion that I’d be joining Valve as a game developer anytime soon. But it did make me think that I could learn to build functioning software in a fraction of the time it would have taken just years ago.
Replit wants to inspire that same idea in more people — a lot more. The startup, cofounded by CEO Amjad Masad, aims to bring the next 1 billion software creators online, a sky-high target considering estimates suggest less than 1% of people worldwide know how to code. Replit wants to change that by making software development as accessible and intuitive as possible, aiming to dominate the online coding world and ultimately use AI to provide “a fully autonomous pair programmer — one that feels like working with another teammate.”
The basic idea came to Masad in his home country of Jordan, where he had been tinkering with computers and software since he was five. When Google Docs was released in 2006, the teenage Masad had a vision of a similar collaborative platform for code.
“I built the world’s first online programming sandbox with a lot of different languages, and that went viral,” Masad said in a 2023 TED Talk. “A bunch of companies in Silicon Valley started using it, and they hired me. I got a visa to the United States and I came to New York.”
While working at Codecademy, Yahoo!, and Facebook, Masad maintained the software as an open-source side project. Its fast-growing user base convinced him to found Replit the company in 2016, with his wife, Haya Odeh, and his brother, Faris Masad.
After several rejections from Y Combinator, Silicon Valley’s premier startup accelerator, their break came when YC co-founder Paul Graham read about Replit on Hacker News and encouraged them to reapply. Graham talked to YC president Sam Altman, and they were accepted.
By 2018, Replit had pulled in 1 million users and significant investments from a16z, Bloomberg Beta, and Reach Capital. In 2022, the company debuted Ghostwriter, the AI coding assistant that let me build a video game in one hour. The company now boasts over 20 million users and a more than $1 billion valuation.
I recently spoke with Masad about coding, the future of software creation, and Replit’s quest to bring the next billion creators online. In a wide-ranging conversation, we also touch on whether AI is going to exterminate humans (“I just think it’s dumb”), the culture of hardcore engineering (“a little toxic”), why English majors sometimes make the best coders, things Silicon Valley shouldn’t be in charge of (“what is true and what is not true”), and why AI doomers may be fueling regulatory capture.
A new era of software development
Large language models — like the kind powering ChatGPT and Ghostwriter — can now convert plain English into lines of code, giving software developers a rough draft so they don’t have to start with a blank page. Masad has likened this evolution to going “from writing letters to the computer” to “having a conversation with the computer.”
The goal is to remove as much friction as possible from the software creation process — to have AI that can function not merely as an assistant but as a competent colleague, one that someday could handle the bulk of the work.
But that doesn’t mean AI tools like Replit’s Ghostwriter, GitHub’s Copilot, or Cognition Labs’ Devin will soon be able to crank out a perfect program immediately after you request one, the way ChatGPT might when you tell it to generate an essay on Shakespeare.
“We’re not going to get to a point, you know, anytime soon where it’ll just build the full thing perfectly,” Masad said. “But [Replit will] generate the code for you, we’ll generate the environment for you, [and] give you tips on how to improve it. You’ll have an AI that you can continue talking to and learning from.”
So, what advances might we see sometime soon? Masad expects AI to become more contextual, changing how it responds to input based on the context in which it’s given.
“Maybe you start with a pure natural language interface, but the UI sort of unfolds as you’re talking to it. And then every part of the UI can be either chattable or explainable, or there’s, like, contextual actions that you could take.”
“For example, you’ll hover over a component in Figma, and instead of doing manual transformations, there’s a prompt, and you can just say something there and the right thing will happen. Or [you might be] hovering over an error in Replit, and then the AI [is] kind of jumping in and telling you, ‘Hey, like, I can fix that. Here’s what happened here.’”
Asynchronous AI will be an interesting development, Masad said: “Being able to send off an agent to do some task while you’re doing some other task, and then the agent coming back and saying, ‘Here’s what I got done.’”
“And finally, multimodal is going to be very interesting. Being able to have a shared whiteboard with AI is fascinating. Being able to just start the camera and […] show the AI things, and being able to talk as you create — those are going to be really interesting. […] I think in the future we’re going to be typing, using the mouse, and talking and perhaps gesturing at the same time.”
The complexity of coding
All engineering comes with a certain level of complexity, whether you’re building software or a bridge. Not all of it is necessary. In 1975, computer scientist Fred Brooks published The Mythical Man-Month: Essays on Software Engineering, which describes two types of complexity: accidental and essential.
Essential complexity must be dealt with to solve the core problem. Masad offered the example of designing a workflow for a financial institution that provides loans. With a wide range of factors to consider — credit scores, income verification, interest rates, collateral valuation, term lengths, regulatory compliance — the problem is “inherently complex, and you need really talented programmers who are able to kind of figure out that workflow.”
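To make the idea concrete, here is a toy Python sketch of the kind of loan-approval workflow Masad describes. Every threshold and rule below is invented for illustration; a real system would also weigh interest-rate pricing, regulatory compliance, and many interacting edge cases, which is exactly where the essential complexity lives:

```python
# Hypothetical loan-approval workflow. All thresholds are invented
# for illustration and do not reflect any real lender's rules.

def approve_loan(credit_score: int, verified_income: float,
                 collateral_value: float, loan_amount: float,
                 term_years: int) -> bool:
    """Toy decision combining a few of the factors a lender weighs."""
    if credit_score < 620:                    # minimum creditworthiness
        return False
    if loan_amount > verified_income * 5:     # debt-to-income cap
        return False
    if collateral_value < loan_amount * 0.8:  # loan-to-value requirement
        return False
    if term_years > 30:                       # maximum term length
        return False
    return True

print(approve_loan(700, 80_000, 250_000, 200_000, 30))  # True
print(approve_loan(600, 80_000, 250_000, 200_000, 30))  # False
```

Even this stripped-down version shows why the problem can’t be waved away: each rule interacts with the others, and the programmer’s job is to encode that tangle faithfully.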
But those programmers might get slowed down by the accidental complexity of, say, a Python environment that’s needlessly confusing and inefficient. Where does accidental complexity come from? One source is backward compatibility, when new tools must be compatible with the old tools and all their shortcomings — or, as Masad puts it, “bad decisions that you’re stuck with” that lead to “the web carrying forward so much junk.”
Masad points to another source of accidental complexity: Software engineers tend to look down on simple ways of doing things. Why? “Some motivations tend to be unhealthy, like job security,” Masad said. “Other motivations tend to be like, ‘Oh, we’re hardcore engineers. We do the hard thing.’”
“The culture of programming tends to be, like, a little toxic because of that,” Masad said. “And novices are turned off because the tools are difficult.”
But culture and accidental complexity aren’t the only hurdles to bringing more people into software development — coding is mentally taxing. Masad notes that programming requires a linear thinking style — something everyone can learn, though some are naturally better at it than others.
“Humans think in non-linear ways. We branch out a lot in our minds, even. We’re having conversations. We’re going around circles and we refer back to something else. And programming is a linear thing. You have to linearize your thoughts. And you can create branches [in programming], but even in those branches, it has to be linear.”
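Masad’s point can be illustrated with a trivial Python function (an invented example, not from the interview): the code branches, but execution inside each branch is still a strict sequence of steps, one statement after another:

```python
# A branching program still executes linearly: within each branch,
# statements run strictly one after another.

def describe(n: int) -> str:
    if n % 2 == 0:       # branch point
        kind = "even"    # this branch is itself a straight line...
    else:
        kind = "odd"     # ...and so is this one
    sign = "non-negative" if n >= 0 else "negative"
    return f"{n} is {sign} and {kind}"

print(describe(4))   # 4 is non-negative and even
print(describe(-3))  # -3 is negative and odd
```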
The next wave of software creators
As technology changes, so do the thinking styles and occupations best positioned to take advantage of new software development tools. Masad offered a couple of examples.
“I think designers and product managers are more empowered than ever because they don’t really need to wait on the engineer to be able to prototype something and explore it,” he said.
“They also […] don’t need to take shit from the engineers, right? You know, engineers will say ‘no’ to a lot of things, but they can go try it themselves. And we see that a lot with Replit. We see a lot of designers take the designs and try to make it as close to reality as possible, and then give the engineers code.”
One benefit of building this bridge (or, at least, stepping stone) between developers and non-developers is that it allows for greater diversity and collaboration. “Diversity” has become something of a buzzword, Masad said, but he sees clear benefits from having people with different perspectives and backgrounds on his teams.
“One of our best engineers […] was an English major who was then a photographer. And, like, I look at his code — it’s art. His creations are art. He’s a really good designer as well. And he brings something to his team that all the [computer science] graduates don’t have.”
The “Great Equalizer”
To Masad, the ongoing evolution in software creation isn’t only about making work easier for today’s developers and companies. It’s mainly a way to introduce a new wave of software creators to the economic opportunities of the internet, which Masad views as the “Great Equalizer” — a vision he formed as a kid in Jordan, where he learned he could earn money armed with only a computer and a good idea.
“The internet is one of the biggest wealth creation engines ever known to human beings,” Masad told Freethink. “It is the one thing that we can use to lower inequality globally [and] give people more access to wealth creation.”
Today, roughly one-third of the global population lacks internet access, and Masad says computers need to become much cheaper and education more accessible before the internet truly becomes the “great equalizer of opportunity.”
But those with access to today’s digital tools arguably have unprecedented opportunities. One under-discussed aspect of the AI revolution, Masad says, is generative AI’s impact on entrepreneurship — “especially small-business entrepreneurship.”
“You don’t have money for a designer? Use AI to generate images. You don’t have money for a programmer? You know, go use Replit and learn a little bit of coding and do it yourself. And so it gives you all the tools to start, like, an online business. […] I think over the next few years that’s going to be a really big boom.”
The ability of these small teams — what Marc Andreessen and Ben Horowitz recently dubbed “Little Tech” — to quickly build products will help reinvigorate and decentralize American business, Masad believes.
Silicon Valley and computer literacy
Silicon Valley tends to have outsized influence over society, Masad says. He drew a parallel to Europe before the mid-15th-century invention of the printing press, when only a literate elite could read, write, and create books.
“I think there’s something like that happening right now where [there’s a] ‘priesthood,’ and they control the way of using software and creating software. And there’s a lot of cultural issues with that.”
Just as the printing press and literacy helped wrest power from the established Church and empower the general public, Masad says a similar shift could happen if more people gained computer literacy.
“Although I love Silicon Valley, there are a lot of things that I think we should not be deciding, such as, like, what is true and what is not true. […] I think when people have basic literacy of how computers work — how programming works, how logic works, how code works — they will at least be aware when they’re being manipulated by algorithms.”
“They’ll be able to have more freedom with their apps. Like, you’re using a SaaS [software as a service] app. You can pull up the API and, like, bang out a few prompts and then be able to move your data around. I mean, that’s incredibly powerful.”
When it comes to AI regulation, Masad outlined two main problems: an unclear regulatory environment and overblown fears about AI.
“Copyright is one area [the government] can provide a lot of clarity,” he says.
“With regards to AI as some kind of weapon of mass destruction: People make the analogy to [nuclear weapons] — I’ve never understood that. No one has actually made the real case for that. No one could tell you how a neural network running on a computer could suddenly kill all humans. […] So I just think it’s dumb and I think it’s scaring politicians. It just seems like some of the people using these arguments are doing it to encourage regulatory capture and overregulation so that they can win.”
There are real problems posed by using AI in warfare, Masad says, referencing a report from +972 Magazine that described how Israel had used AI to help select bombing targets in Gaza.
“It’d be great if there’s some regulation about AI weapons,” Masad says, suggesting international law. “Like, ‘Do not kill a lot of civilians’ is a good idea. But no one’s talking about that because everyone’s talking about the extinction thing. It’s just a huge distraction. I think it’s bullshit. And I think people should stop talking about it.”
In addition to AI-based weapons, Masad says there are valid concerns about using AI around critical infrastructure.
“AI tends to be this probabilistic machine that is imprecise, that hallucinates, that makes the wrong decisions often, right? Even when it’s fairly high reliability — even like 99% reliability — that 1% of unreliability might be catastrophic when it’s applied to things like infrastructure: power plants, water, nuclear. You don’t want AI anywhere near a nuclear power plant. […] I only think neural networks in their current shape should be able to touch them maybe when they’re […] layered with a more deterministic aspect.”
Masad also noted the difference between regulating the application of AI models versus the models themselves.
“You shouldn’t regulate the foundation models, but when you go and try to apply these foundation models on critical human civilizational components, you should really be clear about that.”
We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].