NLP PR #1: A Unified Architecture for Natural Language Processing - Deep Neural Networks with Multitask Learning



Title: A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning
Authors: Ronan Collobert & Jason Weston
Venue: International Conference on Machine Learning (ICML)
Year: 2008


Date: 2019-02-02 (Sat)
Time: 14:00 - 16:00
Location: Kadokawa Hongo Building, 9F (角川本郷ビル)
Meetup event: link
Paper Reading lead: @jabalazs


  • RSVP on the Meetup page
  • Read the “Paper Reading Guidelines”.
  • Every participant should prepare 2 questions to ask at the beginning of the session.
  • Interested people can join us for lunch before the event at 12:50. Please let us know in the comments if you’re coming.
  • Have fun and learn!


Thank you for the awesome reading session @zehrah, @asir, @Michael, @Amjad, @edw, @epochx.

Stay tuned for similar readings in the future!


For those interested, here are the slides I prepared for the reading.


@edw suggested that the reading leader prepare less for a given session (i.e., not make a presentation like the one I did), so we can hold sessions more often.


The number of participants (6 + reading leader) was perfect. We had enough people to keep the discussion going, everybody had their questions answered, and everyone had the chance to participate in the discussion. I would keep this number similar in future readings.

I will update this post as I recall more things.


Thanks for taking the time to prepare and organise this discussion Jorge. Personally, I found it very productive.

As you mentioned, my suggestion for not preparing lots of slides beforehand was in the interest of having more meetings but I’m certainly not against it per se, and your slides were definitely very helpful. Perhaps we do actually need some slides to keep the conversation going, I don’t know.

As for suggestions for papers to discuss in the future, how about the other ones you mentioned in another Discourse discussion (copy-pasted below)?

I don’t see anything wrong with borrowing other experts’ ideas of what constitutes a good paper as a way to filter candidates! Other reasonable filters are papers that won awards at recent conferences, or highly cited ones (of course).

Alternatively, if there are PhD students or industry practitioners in NLP, STT, or any ML topic for that matter who want help understanding a paper they need for their research, perhaps they could propose it, see if there is sufficient interest in discussing it within the community, and go from there?

Just my two bits.


Thank you for your efforts, Jorge! :raised_hands: