Experts' Opinions on NLP


#1

Recently, Sebastian Ruder tweeted about a document collecting opinions from several NLP experts, each answering these four questions:

  1. What do you think are the three biggest open problems in NLP at the moment?
  2. What would you say is the most influential work in NLP in the last decade, if you had to pick just one?
  3. What, if anything, has led the field in the wrong direction?
  4. What advice would you give a postgraduate student in NLP starting their project now?

I made a more readable version of the document and shared it here.

What I gather from the experts’ responses is the following:

  1. What do you think are the three biggest open problems in NLP at the moment?

    • Current models don’t understand language as humans do.
    • Generalization (related to domain adaptation, transfer learning, few-shot learning, zero-shot learning).
    • Evaluation. Current evaluation approaches are often misused and misinterpreted.
    • Reliance on biased, giant datasets for models to “learn” anything.
    • Related to the generalization point above: adapting current models and data to low-resource settings (languages with little data).
    • Introspection / interpretability.
  2. What would you say is the most influential work in NLP in the last decade, if you had to pick just one?

  3. What, if anything, has led the field in the wrong direction?

    • Distance between computational methods and linguistics.
    • Learning how to solve individual datasets instead of understanding and tackling the bigger underlying problems.
    • Blindly trying to beat suboptimal benchmarks or the state of the art on specific datasets. This leads to architecture hacking and “graduate student descent” (also see this tweet).
  4. What advice would you give a postgraduate student in NLP starting their project now?

    There is a lot of good advice here, and I would definitely recommend reading all of it. Some personal favorites:

    • George Dahl: Learn how to tune your models, learn how to make strong baselines, and learn how
      to build baselines that test particular hypotheses. Don’t take any single paper too
      seriously, wait for its conclusions to show up more than once. In many cases, you
      can take even pretty solid published work and make much stronger baselines that
      give some more context on the results.

    • Karen Livescu: Collaborate a lot. Do internships. Find multiple mentors and learn to weigh their
      advice. Take courses on fundamental methods, even if they don’t seem relevant right
      now. Learn the history of the field.

    • Kyunghyun Cho: I believe scientific pursuit is meant to be full of failures. 99 out of 100 ideas you come up with are supposed to fail. If every idea works out, it’s either a) you’re not ambitious enough, b) you’re subconsciously cheating yourself, or c) you’re a genius, the last of which I heard happens only once every century or so. So, don’t despair!


Share your own thoughts and opinions! :smiley:


#2

I skimmed through the doc a couple of days ago and thought it was a great idea; I learned a couple of things and agreed with a lot of it. You’re right that the original doc is really not very readable, and your version is really nice! Maybe you could also share it on Twitter and tag Sebastian Ruder (this was initiated by him and others)? (I can also share it with the MLT account if you want and tag you both!)


#3

Yeah! It would be awesome if you shared it with the MLT account :slight_smile:. Of course, all credit should go to Sebastian and his group; I just did some editing :sweat_smile:.


#4

Will do! The only thing I would add to the doc is
“Organized by Herman Kamper, Sebastian Ruder, and Stephan Gouws at the Deep Learning Indaba 2018.
You can find the slides of the session [here](https://drive.google.com/file/d/15ehMIJ7wY9A7RSmyJPNmrBMuC7se0PMP/view).”, like in the original doc.


#5

It’s already on the first page, after the table of contents :smiley: Do you think it would be better to put it on the cover?


#6

Oh, I didn’t see that. Which probably means yes, it would be better to put it on the cover. What do you think? ^^


#7

Just updated it! The original text now appears on the front page, and my additions in a footnote.


#8

Thanks so much, looks great! I tweeted it! :blush: