Part 21 of the series where I interview my heroes.

Index and about the series: “Interviews with ML Heroes”
You can find me on Twitter @bhutanisanyam1

Today, we’re talking to a very special “Software Guy, currently digging deep into GANs” — the author of DeOldify: Jason Antic.

Jason is a CS Major and has been working as a Software Engineer for over 12 years. You might be surprised to know that he is still very new to DL, having taken it up seriously about nine months ago.

We’ll be talking about Jason’s experience and the coolest GAN project that’s live on a website: colorize.cc


About the Series:

I have very recently started making some progress with my self-taught Machine Learning journey. To be honest, it wouldn’t have been possible at all without the amazing online community and the great people who have helped me.

In this series of blog posts, I talk with people who have really inspired me and whom I look up to as role models.

The motivation behind doing this is that you might see some patterns, and hopefully you’ll be able to learn from the amazing people I have had the chance to learn from.


Sanyam Bhutani: Hello Jason, thank you for taking the time to do this.

Jason Antic: My pleasure!

Sanyam Bhutani: You have created one of the most amazing Deep Learning projects, and you’ve been working as a Software Engineer for over a decade now.

Can you tell us how you first got interested in Deep Learning?

Jason Antic: Well, really, I’d been interested in neural networks long before they were cool. They just seemed, intuitively, to have a lot more potential than other methods. But yeah, they were seriously uncool if you go back 7+ years. In fact, I took two artificial intelligence courses in college and neither of them covered neural networks. Years later, AlexNet swept the competition in the 2012 ImageNet challenge. That caught my attention because I knew computer vision basically just didn’t work before then. Comparatively speaking, Deep Learning was magic. I started devouring information on the subject afterwards, but it wasn’t until years later that I actually started developing my own models.

Sanyam Bhutani: You had a few false starts with DL, and finally took fast.ai through to the end. What appealed to you the most about fast.ai?

Jason Antic: They simply have a much better method of teaching than other courses (I rage-quit quite a few popular MOOCs!). They start you with a huge amount of momentum right away — you’re creating an awesome dogs/cats classifier on the first day. THEN you drill down into the details. This is far more motivating and more effective at connecting the dots of what you’re learning. It’s a method of teaching spelled out in a great book called “Making Learning Whole.”

Also, what’s being taught is literally cutting edge. Uniquely so. In fact, in V3 part 1, I got to collaborate with Jeremy and Sylvain on lesson 7, and I committed the final notebook used for the GAN super-resolution bit mere hours before the course started. It was literally something we had invented over the previous two weeks — GAN pretraining. I asked Jeremy if this was normal for him in preparing courses, and he confirmed it was. It’s mind-boggling that it actually works out, but the end result is amazing!

Sanyam Bhutani: For the readers with a less technical background, could you give us an ELI5 of what your project is and how it works?

Jason Antic: Sure! I basically created a deep learning model to colorize old black and white photos. While this isn’t the first deep learning model to colorize photos, it does a few new things that make it significantly better than previous efforts. Namely, the output is much more colorful and convincing, and this comes from setting up training of the colorizing model to involve a second model — the “critic” — that basically is there to “criticize” the colorizations and teach the “generator” to produce better images. This design is called a GAN — Generative Adversarial Network.

Because the “critic” model is also a neural network, it can pick up on a lot of the nuances of what makes something look “realistic” that simpler methods just can’t. The key here also is that I as a programmer simply cannot comprehend how to explicitly code something to evaluate “realism” — I just don’t know what all that entails. So that’s what a neural network is here to learn for me!
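To make the generator/critic dynamic concrete, here is a minimal, self-contained PyTorch sketch of that adversarial setup. The tiny networks, the random stand-in data, and the single training step are illustrative assumptions only; this is not DeOldify’s actual architecture or training code.

```python
import torch
import torch.nn as nn

# Generator: maps a 1-channel B&W image to a 3-channel color image.
generator = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

# Critic: scores how "realistic" a color image looks (higher = more real).
critic = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1),  # 32x32 input halved to 16x16
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

grayscale = torch.rand(8, 1, 32, 32)   # stand-in batch of B&W photos
real_color = torch.rand(8, 3, 32, 32)  # stand-in batch of color photos

# Critic step: learn to tell real color photos from generated ones.
fake_color = generator(grayscale).detach()
c_loss = (bce(critic(real_color), torch.ones(8, 1)) +
          bce(critic(fake_color), torch.zeros(8, 1)))
c_opt.zero_grad()
c_loss.backward()
c_opt.step()

# Generator step: produce colorizations the critic scores as "real".
g_loss = bce(critic(generator(grayscale)), torch.ones(8, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

In real training these two steps alternate over many batches, with the critic and generator improving in tandem.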

Sanyam Bhutani: I think you’re the Thomas Edison of GANs (or at least of photo colorization using DL). The idea didn’t work for quite a few weeks, and you ran over 1,000 unsuccessful experiments.

What made you stick with the project and not give up? How do you think a software engineer can stay motivated and not give in to “imposter syndrome”?

Jason Antic: Well, that comparison to Edison is rather flattering! What made me stick with the project and not give up is this somewhat unreasonably optimistic view of mine that there’s a solution to any reasonable problem, and that it’s just a matter of effectively navigating the search space to find the answer. Effectively, to me, that means doing a lot of experiments, being methodical, and constantly questioning your assumptions and memory, because that’s typically where problem-solving goes wrong.

That being said, despite my undeniable successes, I still to this day fall into that dark mental state of self-doubt, wanting to give up, and “imposter syndrome”. Even earlier this week it started creeping up on me when I was running into difficulties, and the intrusive thoughts started pouring in again. “You’re deluded, and you were just lucky with DeOldify.” Believe it or not, that still happens.

Then I pushed through it and figured it out, and I am very excited about what will be released in the next month as of this writing :)

How do I push through it? The belief that a solution is there and that I’m capable of finding it, simply because I’m a normally functioning human being who can problem-solve, is a big one. That’s the key point here — it’s not so much a matter of intelligence as it is of method (and that’s learnable!). Another motivating factor is the realization that there is, in my mind, no better way to spend my time than trying to solve big/cool problems, and that it’s worth the blood, sweat, and tears. Purpose and autonomy are huge motivators.

Sanyam Bhutani: There is a flip side to this as well: how does someone know when to quit a project that might just be too ambitious for the current technology?

Jason Antic: Yes, you definitely have to know when to quit, and that’s quite the art. I say “No” to, and/or quit, a lot of things actually. Why? Because for everything you say “Yes” to, you’re saying “No” to many other things. Time (and sanity) is precious. As a result, especially lately, I’ve said “No” to quite a few opportunities that others find crazy to say “No” to.

So quitting for me is decided first on whether or not the path falls squarely in my realm of values, interests, and maintaining sanity. If you’re talking about an ambitious technological project, then you have to go one step further and evaluate whether or not it’s actually feasible. Often you simply can’t answer that right away, especially if you’re doing an ambitious project! So that’s why you experiment. If you discover a sound reason (and not just a rage-quit rationalization) as to why something won’t work, then quit, learn from it, and move on! But be careful on this point — a lot of problems are solved handily simply by shifting perspective a bit. For this reason, I’ll stick with a problem a bit longer than seems reasonable, because often my perspective is simply wrong. Often the solution comes when I walk away (shower thoughts, basically).

Sanyam Bhutani: It’s very interesting to note that Jason doesn’t have “experience” with photo colorization, yet he’s done a great job at it. He even pushed a “Chromatic optimization” update to the repository that allows DeOldify to run on GPUs with less memory.

What are your thoughts about “Non-Domain” experts using DL to make progress in general?

Jason Antic: It’s great that you brought up the “Chromatic optimization” because that actually didn’t come from me. It came from somebody on Hacker News, on the day DeOldify went viral, who had at least some domain knowledge and suggested it as a way to get much better results much more efficiently. So I think this is an important point — domain expertise still counts. And good old-fashioned engineering and design still count. Not everything is going to be solved by your deep learning model — at least not as of now.

That being said, I’ve been able to get really far on DeOldify with zero domain expertise, and that idea generally excites me! I think we’ve barely scratched the surface of the implications of this — that we can have a model discover things that not even experts know about yet. The challenge, of course, is figuring out what the models are figuring out and not just treating them as black boxes. But I really think that we’re going to see some big breakthroughs in science in the next 10–20 years because of this (and it’s already starting to happen to an extent). Additionally, not requiring domain expertise to be effective means many more minds can be put to the task of solving many of our world’s problems. This is very cool.

Sanyam Bhutani: Your project is built on top of the fast.ai library (v0.7). Can you tell us how it helped in the development of the project?

Jason Antic: The fast.ai library is brilliant! It encodes a lot of best practices (such as the learning rate finder) which make your life much easier as a deep learning developer. It’s painful and silly to constantly have to operate at a low level and reinvent things (poorly) in TensorFlow or raw PyTorch. Fast.ai’s library does a lot of this for you, which means you can spend more time doing productive things. This is how progress in software generally happens, and I feel fast.ai is leading the way. Do note, too: I’m about to push a fast.ai v1 upgrade, and the benefits were huge — speed, memory usage, code simplification, and model quality all improved largely out of the box.
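As a point of reference, here is roughly what one of those encoded best practices looks like in use: a minimal sketch of the learning rate finder, assuming fastai v1’s cnn_learner API. The pets dataset, regex, and hyperparameters are illustrative choices from the fastai v1 docs, not anything taken from DeOldify.

```python
from fastai.vision import *

path = untar_data(URLs.PETS)              # small sample dataset bundled with fastai
fnames = get_image_files(path/'images')
data = ImageDataBunch.from_name_re(
    path/'images', fnames, pat=r'/([^/]+)_\d+.jpg$',
    ds_tfms=get_transforms(), size=224, bs=32,
).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.lr_find()                           # sweep learning rates, recording the loss
learn.recorder.plot()                     # plot loss vs. learning rate to pick a value
learn.fit_one_cycle(1, max_lr=1e-3)       # train using the chosen rate
```

The point is that learning rate selection, one-cycle scheduling, and normalization are one-liners here rather than things you hand-roll each time.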

Sanyam Bhutani: I’ve tried running DeOldify and it really, really blew me away. Were there any images or scenarios that made even you go WOW?

Jason Antic: It might sound silly, but the cup in the image below was my first wow moment. This was one of the first images I rendered after I finally experimented with a self-attention-GAN-based implementation, after failing to get this stuff to work with a Wasserstein GAN. Up to that point, I hadn’t gotten anywhere close to this kind of detail, or this interesting and seemingly appropriate colorfulness, in my renders. I do acknowledge the flaws in this image (the zombie arm, for example). But I knew I was on to something after seeing this.

Sanyam Bhutani: DeOldify works, and works really well. What’s next for the project?

You’ve mentioned in your repository and on Twitter that you want to make a lot of under-the-hood improvements. Why do you think those are a priority over building something potentially even cooler than this?

Jason Antic: I have no doubt that there’s going to be a model in the next year or two that’s going to blow my model away in terms of sophistication. That’s just the nature of progress! But what I’m really interested in is making this stuff practically useful.

When I first released DeOldify, I was able to create some truly impressive images, but at a cost — I had to search for quite a while just to find an image that didn’t have some sort of awful glitch or discoloration. GAN training is currently a double-edged sword in this sense — you get really great results, but stability is really difficult to achieve. This is the sort of thing I’m wrapping up addressing now. It will make the difference between having a cool tech demo and actually being able to use the tech in the real world in a truly automated way. It’ll also enable something else that’s way cool, but I can’t talk about that yet. ;)

The fact that DeOldify is currently a supreme memory hog is another thing I’m attacking. Quite successfully, I’d add! That will also enable practical usability.

Once the items above are addressed and announced (very soon!), I’ll finally look into making some money on this stuff. As you can imagine, countless friends and relatives have been asking, “Are you monetizing this?” I’ll finally be able to allay their concerns about giving everything away for free, LOL. And who knows what all that will involve. Just no VC money. Definitely not that.

Sanyam Bhutani: For the readers wondering about Jason’s experience in Deep Learning: he picked up this project right after completing fast.ai Parts 1 and 2. What pointers do you have for enthusiasts who would like to build something as cool as DeOldify?

Jason Antic: Well, to be honest, DeOldify’s origin story was that I had a bit of a shower thought while taking a long walk (a walk thought, I guess), where I was like “Ohhh…GANs learn the loss function for you!” That’s not actually an original thought but you don’t hear it much and I certainly hadn’t heard it at that point. Then I just paired this up with a list of projects I had compiled over time (again “walk thoughts”) and figured: “Let’s try this with colorization because I have no clue what the loss function should actually be. So perhaps the GAN can learn it for me!”
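To make that “GANs learn the loss function” intuition concrete, here is a tiny PyTorch contrast between a hand-crafted pixel loss and a critic-learned loss. The tensors and the one-layer critic are hypothetical placeholders for illustration, not DeOldify’s code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

pred = torch.rand(8, 3, 32, 32)    # generated colorizations (placeholder)
target = torch.rand(8, 3, 32, 32)  # ground-truth color images (placeholder)
critic = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))

# Hand-crafted loss: the programmer decides what "good" means
# (here, per-pixel closeness to the original colors).
pixel_loss = F.l1_loss(pred, target)

# Learned loss: a trained critic scores "realism" instead, so the
# generator chases whatever notion of realism the critic has learned.
gan_loss = -critic(pred).mean()
```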

That was the intuition, and I was unreasonably sure it was right. That gave me the stupid energy to spend the next six weeks doing probably upwards of 1,000 experiments (failure after failure) until I eventually stumbled upon self-attention GANs and it finally worked. And I have to emphasize that point — it was a lot of failures, a lot of unknowns, and a lot of not giving up, even though I was taking a bit of a psychological beating after a while. You know — self-doubt and all that jazz.

Hence my advice is this: find something you’re interested in enough to pursue in a manic way, and guide your efforts with at least somewhat rigorous experimentation. And stay the course until you have an actual reason (evidence) to believe that what you’re pursuing is impossible, as opposed to just unknown. I think this is where most people shoot themselves in the foot — they give up way too easily!

Sanyam Bhutani: Are there any upcoming updates in DeOldify that we should be really excited about?

Jason Antic: There’s a fast.ai v1 update coming very soon, and along with it come the many benefits of said upgrade! The model is going to be much faster, much smaller, and higher quality, with zero artifacts and nearly zero “artistic selection” needed. A large part of this comes simply from taking advantage of what’s available in the new fast.ai code.

There’s other stuff you guys should be excited about that I just can’t talk about yet. You’ll hear about it soon or soonish. I’m such a tease :P

Sanyam Bhutani: How can someone best contribute to DeOldify?

Are there any long-term goals that you’d like to work on with the project?

Jason Antic: I’ve been getting awesome key contributions from people in so many forms. For example, there are those who simply want something I haven’t had time to produce yet, so they build it themselves. The Colab notebook is a great example of this. I loved that one because it made DeOldify way more accessible to a lot of people.

Another interesting key contribution was somebody on Hacker News telling me about their “Chromatic optimization” idea, which turned out to be the enabling factor in doing unlimited-resolution image colorization. It wasn’t code, but it was so important, and it made DeOldify way better.

Then there are people doing awesome renders with DeOldify, sharing them, and even giving me feedback on the problems they hit and a wishlist of updates they’d like. That’s a great contribution too!

Contributions can come in many forms. I welcome them wholeheartedly. They make this “job” of mine so enjoyable and meaningful.

Sanyam Bhutani: Given the explosive growth rate of ML, how do you stay updated on recent developments?

Jason Antic: Generally speaking, reading everything and knowing everything is simply not an option today. There’s no such thing as the “person who knows everything” anymore. Thomas Young was apparently the last person to hold that title, and he lived two hundred years ago.

Even within just the field of machine learning, there are so many papers and so much new information coming out that you can’t possibly keep up. So what do you do? You filter intelligently. You choose good resources that do the hard work of distilling what’s actually important and presenting it in a much more useful manner.

Fast.ai is an excellent example of this, and in fact, Jeremy told me that they like to think of their work as “refactoring Deep Learning”. We need that desperately, as there is a lot of noise to be separated from the actual signal at this point.

It’s also much more efficient to let others in the community figure out what’s great for you, and then take a look yourself. So follow some of your ML heroes on Twitter and see what they say! They’re tweeting about great papers and new developments all the time.

Sanyam Bhutani: What are your thoughts on Machine Learning as a field? Do you think it’s overhyped?

Jason Antic: Speaking specifically to deep learning, I think Jeremy Howard said it best: “‘Deep Learning is overhyped’ is overhyped.” Simply put, I’m strongly of the opinion that, even if you just look at the capabilities of what’s already available today, we’ve far underutilized deep learning’s potential uses and the problems it could solve. And I think that’s chiefly because there probably aren’t enough engineers and domain experts running with the tech yet to make this stuff work in the real world. I think people who say it’s overhyped either just like to be contrarian or lack imagination.

That being said, there are some seriously awful startups popping up here and there that make outrageous claims yet still convince somebody to fork over money. It’s reminiscent of the dotcom boom in a way. But that’s to be expected — there are always going to be swindlers, and they’re always looking for new ways to take your money. AI and blockchain just happen to be the latest sexy ways to do that. Keep in mind: though there was a dotcom bust, the internet still really revolutionized the world. The same will happen with deep learning — that much I’m convinced of.

Sanyam Bhutani: Before we conclude, any tips for beginners who aspire to go out and build amazing projects using DL but feel too overwhelmed to even start?

Jason Antic: Go directly to fast.ai, and do the work! Great results take time and effort, and there’s no good substitute for that.

Sanyam Bhutani: Thank you so much for doing this interview.


You can find me on Twitter @bhutanisanyam1
Subscribe to my newsletter for updates on my new posts, interviews with my Machine Learning heroes, and Chai Time Data Science.