
I strongly recommend reading the entire 48-page document and visiting many of its references.

But here are some key takeaways.

How deep learning can improve how we conduct scientific research

For many problems, simpler machine learning algorithms often provide more efficient solutions.

Neural networks usually (but not always) need lots of data.

They are also difficult to interpret.

[Image: the COVID-Net coronavirus detection algorithm]

Aside from the commercial and industrial applications, CNNs have found their way into many scientific domains.

One of the best-known applications of convolutional neural networks is medical imaging analysis.

Recently, scientists have been using CNNs to find symptoms of the novel coronavirus in chest x-rays.


Some of the visual applications of deep learning are less known.

For instance, neuroscientists are experimenting with pose-detection neural networks to track the movements of animals and analyze their behavior.

To be clear, the current AI algorithms process language in ways that are fundamentally different from, and inferior to, the human brain.

[Image: an example saliency map produced by RISE, an explainable-AI technique]

Transformers have proven to be especially effective in scientific research.

What if you don't have a lot of data?

One of the main criticisms against deep learning is its need for vast amounts of training data.

In many fields of science, there's not enough labeled data available.

But not every deep learning model requires lots of training data.

Transfer learning involves fine-tuning a pre-trained AI model for a new task.

Typically, performing transfer learning is an excellent way to start work on a new problem of interest.
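As a rough illustration of the idea (not any specific library's API), the sketch below mimics transfer learning in miniature: a "pre-trained" feature extractor is kept frozen, and only a small new classifier head is trained on the data for the new task. The random projection standing in for the pre-trained network, the synthetic dataset, and all parameter values are assumptions made for this demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "pre-trained" model: a fixed (frozen) feature extractor.
# In practice this would be a network trained on a large dataset; here it
# is a fixed random projection followed by a tanh nonlinearity.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def extract_features(x):
    """Frozen feature extractor: its weights are never updated."""
    return np.tanh(x @ W_frozen)

# A small labeled dataset for the *new* task (two classes).
# Labels are synthetic, chosen so the frozen features can capture them.
X = rng.normal(size=(200, 64))
y = (X @ W_frozen[:, 0] > 0).astype(float)

# Transfer learning: train only a new linear "head" on top of the frozen
# features, via logistic regression with plain gradient descent.
feats = extract_features(X)
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probabilities
    grad_w = feats.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(f"training accuracy of the fine-tuned head: {acc:.2f}")
```

Because the expensive feature extractor is reused as-is, only 17 parameters are learned here, which is why transfer learning can work with far less labeled data than training from scratch.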

Self-supervised learning is still in a very preliminary stage, however, and also an active area of research.

But an area that has already yielded results is generative models such as generative adversarial networks (GANs).

GANs can generate fake data that resemble their real counterparts.

Perhaps they're best known for the natural-but-nonexistent human faces they can create.

Artists are now using GANs to generate art that can sell at stellar prices.

In a recent project, AI researchers trained a GAN to generate functional protein sequences.
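The adversarial setup behind these results can be sketched in miniature. The toy below is an illustration only (the 1-D Gaussian "real data", the linear generator, and all hyperparameters are assumptions, not the protein work described above): a two-parameter generator tries to fool a logistic discriminator, and in doing so its samples drift toward the real distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real" data: samples from N(4, 1) -- the distribution the generator
# must learn to imitate (a toy stand-in for images, proteins, etc.).
def sample_real(n):
    return rng.normal(4.0, 1.0, size=n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, b = 1.0, 0.0   # generator: maps noise z ~ N(0,1) to w*z + b
a, c = 0.0, 0.0   # discriminator: logistic classifier sigmoid(a*x + c)
lr = 0.05

for step in range(2000):
    real = sample_real(64)
    z = rng.normal(size=64)
    fake = w * z + b

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    p_real, p_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * np.mean(-(1 - p_real) * real + p_fake * fake)
    c -= lr * np.mean(-(1 - p_real) + p_fake)

    # Generator update: push D(fake) -> 1 (fool the discriminator).
    p_fake = sigmoid(a * fake + c)
    w -= lr * np.mean(-(1 - p_fake) * a * z)
    b -= lr * np.mean(-(1 - p_fake) * a)

samples = w * rng.normal(size=1000) + b
print(f"generated samples: mean={samples.mean():.2f}, std={samples.std():.2f}")
```

The generator starts out producing samples centered at 0; the only training signal it ever receives is the discriminator's gradient, yet that is enough to pull its output toward the real data's mean of 4.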

Generative AI and reinforcement learning come with some caveats, however.

Scientific research and deep learning's interpretability issues

Another challenge that deep learning often presents is interpretability.

Fortunately, advances inexplainable artificial intelligencehave helped, to some degree, overcome these barriers.

Schmidt and Raghu divide AI interpretability techniques into two broad categories: feature attribution and model inspection.

Feature attribution helps us better understand which features in a specific sample have contributed to a neural network's output.

These techniques produce saliency maps that highlight these features.

There are different techniques that produce saliency maps, including GradCAM, LIME, andRISE.
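As a simplified sketch of the idea behind randomized-masking attribution methods such as RISE (this is an illustration of the principle, not the published algorithm): probe the model with many randomly masked inputs and average the masks weighted by the model's score, so pixels whose presence raises the score accumulate high attribution. The tiny "model" and image below are assumptions chosen so the correct answer is known in advance.

```python
import numpy as np

rng = np.random.default_rng(2)

# A black-box "model" scoring an 8x8 image: it responds only to
# brightness in the top-left 3x3 corner, so we know in advance which
# pixels the saliency map should highlight.
def model(img):
    return img[:3, :3].sum()

img = rng.uniform(size=(8, 8))

# RISE-style attribution: apply many random binary masks, score each
# masked image, and average the masks weighted by the model's output.
n_masks = 2000
saliency = np.zeros_like(img)
total = 0.0
for _ in range(n_masks):
    mask = (rng.uniform(size=img.shape) > 0.5).astype(float)
    score = model(img * mask)
    saliency += score * mask
    total += score
saliency /= total  # normalize the map

# Pixels the model actually relies on should get the highest attribution.
top_left = saliency[:3, :3].mean()
rest = saliency[3:, 3:].mean()
print(f"mean saliency inside region: {top_left:.4f}, outside: {rest:.4f}")
```

Note that the model is treated as a black box throughout: the map is built purely from input-output queries, which is what makes masking-based attribution applicable to models whose internals are inaccessible.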

Model inspection techniques, in contrast, provide better insights into the general workings of the AI model.
