Tell us about yourself!


#1

Hello everyone! Use this thread to introduce yourself to the community!

  • Tell us a little bit about yourself
  • Why are you interested in Machine Learning/Deep Learning?
  • What are you working on now?
  • What do you hope to get out of the community?
  • Anything else you’d like to share

#2
  • Tell us a little bit about yourself

I’m Francisco, born and raised in Brazil. I came to Japan in 2007 for my Master’s and have stayed ever since.

  • Why are you interested in Machine Learning/Deep Learning?

I believe that in a few years, Machine Learning knowledge won’t be a remarkable part of your CV but rather a basic prerequisite for pretty much any job in the tech industry.

  • What are you working on now?

My day job isn’t really related to ML/DL, but lately I’ve been playing around with music using audio spectrograms.
As soon as I have something interesting to show I’ll share it with you guys :slight_smile:

  • What do you hope to get out of the community?

Learning new things, a place to work on my projects, and discussing ML/DL with like-minded people.

  • Anything else you’d like to share

I joined the community a while ago and have been helping with event live streaming and managing Discourse. If you have any questions or feedback related to this forum, feel free to contact me :slight_smile:


#3
  • Tell us a little bit about yourself

I’m Tian-Jian Jiang from Taiwan, and I usually go by Mike. I came to Tokyo in 2012 for a machine/crowd translation startup (not Gengo). Two years ago, I switched to an industrial lab. Starting from this month, I’m an independent researcher (read: unemployed).

  • Why are you interested in Machine Learning/Deep Learning?

My research interests lie primarily in computational morpho-(phonemic|syntactic|semantic) analysis, especially comparing Sino-Xenic languages (CJKV) with English. The truth is, however, that the only thing I’ve been doing for a decade is feature engineering, so it’s about time to dig “deeper.”

  • What are you working on now?

Envying physics, fantasizing that character/subword-based models can imitate multi-scale entanglement renormalization ansatz, and admitting that I don’t know what I’m talking about.

  • What do you hope to get out of the community?

Reality check.

  • Anything else you’d like to share

Thanks to @asir for introducing me to this group. I haven’t met anyone in person yet, partly because the latest event was cancelled. Looking forward to seeing you soon!


#4

Tell us a little bit about yourself

Hi! My name is Rheza. I was born and raised in Indonesia.
I came to Japan in 2012 for a short-term exchange and came back in early 2015 to do a Master’s course.

Why are you interested in Machine Learning/Deep Learning?

For me, it’s interesting to see how a computer “understands” and processes images.
Ever since seeing that, I’ve been into machine learning.

What are you working on now?

Mainly I’m working on a reader engine (Kindle-like) for mobile devices.
I also have some projects related to ML (image recognition), but I’m not spending much time on them.

What do you hope to get out of the community?

My main task right now is not related to ML, so I hope to stay up to date with recent ML research and perhaps build something with it!

Anything else you’d like to share

I was working on signature recognition for mobile devices and image recognition for an imbalanced and small dataset.
If anyone is interested, let’s discuss together!! :smiley:


#5

Tell us a little bit about yourself

I’m Jorge. I was born in Chile and lived there until April 2016, when I came to Japan as a research student. My background is in Industrial Engineering, but I did my undergrad thesis on NLP-related work (sentiment analysis). Since then I have worked only on NLP, while at the same time learning all the computer science concepts I didn’t learn as an undergrad.

Why are you interested in Machine Learning/Deep Learning?

I’m interested in learning about how humans think through teaching machines to do so. Like Chomsky said in “Language and Mind”:

Personally, I am primarily intrigued by the possibility of learning something, from the study of language, that will bring to light inherent properties of the human mind.

What are you working on now?

I’m studying how different ways of combining word representations from different hierarchies (pre-trained word embeddings; characters) affect both the final word representation and downstream sentence representations.

Similar to this and this paper.
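As a rough illustration of what combining representations from different hierarchies can mean, here is a toy sketch (all names, dimensions, and values are made up, and the averaging stands in for a real character-level encoder) that concatenates a pre-trained word vector with a character-level representation:

```python
# Toy sketch: combine a pre-trained word embedding with a
# character-level representation by concatenation.
# Everything here is hypothetical and deterministic on purpose.

CHAR_DIM = 3

def char_vector(ch):
    # Hypothetical character embedding derived from the code point,
    # just so the example is self-contained.
    base = ord(ch)
    return [((base * k) % 7) / 7.0 for k in range(1, CHAR_DIM + 1)]

def char_level_repr(word):
    # Average the character vectors (a stand-in for a char CNN/RNN).
    vecs = [char_vector(c) for c in word]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def combine(word, word_embedding):
    # Concatenation: the simplest way to merge the two hierarchies.
    return word_embedding + char_level_repr(word)

pretrained = {"cat": [0.1, 0.2, 0.3, 0.4]}  # made-up 4-dim embedding
combined = combine("cat", pretrained["cat"])
print(len(combined))  # 4 word dims + 3 char dims = 7
```

Concatenation is only one option, of course; gating or weighted sums between the two representations are other ways of doing the combination.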

What do you hope to get out of the community?

Sharpen my public-speaking skills, get to meet people interested in these topics outside of academia, share what I’ve learned, learn from others :smiley:

Anything else you’d like to share

Besides Machine Learning, I’m also interested in software engineering. I’ve actually been thinking about starting a blog bringing these two worlds together (I’m partway through the first post, in which I teach readers how to use tmux for ML research). If there’s anybody interested in this kind of topic, please let me know… Knowing that there are people interested might be the motivation I need to finish the post and write more :sweat_smile:


#6

Besides from Machine Learning, I’m also interested in software engineering. I’ve actually been thinking about beginning a blog bringing these two worlds together (I’m partly through the first post in which I teach readers how to use tmux for ML research). If there’s anybody interested in this kind of topic please let me know… Knowing that there’s people interested might be the motivation I need to finish the post and write more :sweat_smile:

I’m involved in and interested in the software engineering side of things, and most of my job is juggling the two, with more emphasis on software/platform engineering than ML for now.

Since the contact surface of these fields is so large and actively worked on, I am very interested in hearing the specifics of what you are working on. Could you please share your ideas/work here or in a post if possible? :slight_smile:


#7

I’m not working on anything specific besides the tmux blogpost, and convincing my collaborators that good software engineering practices, despite making research progress a bit slower, are beneficial in the long term.

With “good software engineering practices” I basically mean things like (the list is not exhaustive):

  • Good code readability: when writing code, consider that you’re writing it for other people to read (even if that’s only future you), not just for the computer.
  • Don’t Repeat Yourself ™️ (aka the DRY principle). Also related to design patterns, encapsulation, and avoiding spaghetti code.
  • Using tools that have been proven to streamline software development. For example, having a basic to intermediate understanding of how git works and how it differs from github.com: understanding what branches are, knowing how to merge or rebase them, etc. Knowing when it’s better to use Jupyter notebooks vs. when it’s better to write code in an IDE such as PyCharm.
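To make the DRY point concrete, here is a tiny hypothetical before/after in research-style Python (the preprocessing itself is made up):

```python
# Repetitive version: the same cleaning steps copy-pasted per split.
# A fix to the preprocessing now has to be made in two places.
def load_train(texts):
    return [t.strip().lower() for t in texts]

def load_test(texts):
    return [t.strip().lower() for t in texts]

# DRY version: one function, reused everywhere, so the logic
# lives (and changes) in exactly one place.
def preprocess(texts):
    return [t.strip().lower() for t in texts]

train = preprocess([" Hello ", "World"])
print(train)  # ['hello', 'world']
```

Trivial here, but in a real experiment pipeline the duplicated block is usually twenty lines of tokenization, and the two copies silently drift apart.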

Some books that I love on this topic are:

In other words, I’d like to convey that there is a discipline, software engineering, older than Deep Learning and Machine Learning, that deals with how to make good-quality software and has lots of knowledge we can use to make research more accessible and easier to replicate.


#8
  • Tell us a little bit about yourself
    I work in Nihonbashi, Tokyo at TOYOTA Research Institute Advanced Development (on temporary transfer from TOYOTA Motor Corp.), and I have been a doctoral student at Keio University since last year.

  • Why are you interested in Machine Learning/Deep Learning?
    Autonomous driving requires intelligence rather than rule-based operation.

  • What are you working on now?
    Developing deep learning processors.

  • What do you hope to get out of the community?
    Understanding the basics of deep learning and feeding that back into our development.

  • Anything else you’d like to share
    I am seeking a new position developing deep learning processors.
    I am also interested in graph processing; graph neural networks might be able to generate hardware designs, so I want to dive into them.


#9
  • Tell us a little bit about yourself
    Hello everyone! My name is Aaveg Barole (アーベガ バロレ). I was born and raised in India.
    I came to Japan last year (August 2018). I have 3 years of experience in the IT industry; in India, I worked at NTT DATA and Fujitsu as a software developer.

  • Why are you interested in Machine Learning/Deep Learning?
    Machine Learning and Deep Learning are very popular these days. I am very interested in gaining knowledge of them, and in the future I look forward to working as an AI engineer. I was a Java and C# developer back in India, but I was interested in Python, so I studied it on my own through online courses and applied that knowledge to complete small tasks in projects.

  • What are you working on now?
    Currently, I am working on C# and Python (deep learning) at NTT DATA Japan (working on the client’s site).

  • What do you hope to get out of the community?
    I want to learn about the latest topics/technologies and keep growing my AI knowledge.

  • Anything else you’d like to share
    I like to socialize and love to talk about the latest technologies.
    I know Japanese :slight_smile:


#10
  • Tell us a little bit about yourself
    Name: Emil.
    Current position: PostDoc at UTokyo.
    From: Serbia and Hungary.

  • Why are you interested in Machine Learning/Deep Learning?
    My background is High Performance Computing. I find ML amazing, and I want to contribute by making the algorithms faster, maybe apply it in HPC and make algorithms optimize themselves - the “holy grail” would be a spiral of ML improving HPC and HPC improving ML resulting in the fastest possible code on the given hardware.

  • What are you working on now?
    Different ways to store and compress the parameters of neural networks.

  • What do you hope to get out of the community?
    Problems! If you have algorithms that are too slow, don’t jump to simplifying your model immediately - contact me, and I’d be happy to attempt to make the unfeasible feasible.

  • Anything else you’d like to share
    Emacs, ArchLinux, Judo.


#11

Hi Emil,

I’ve got one that’s probably too broad and only indirectly related to HPC:
https://github.com/clab/dynet/issues/399
In short, non-deterministic cuDNN parallel computation makes already-unstable gradient descent even less stable. I’m not sure any deep learning framework has solved the problem, although Chainer offers a somewhat more stable yet much slower option by disabling cuDNN’s parallelism.
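For anyone else hitting this, here are config fragments (assuming reasonably recent PyTorch and Chainer versions, and not a complete script) that trade speed for more deterministic cuDNN behavior. Note that these don’t remove every source of GPU non-determinism, such as CUDA atomics in some ops:

```python
# PyTorch: force deterministic cuDNN algorithms and disable the
# autotuner (which can pick different algorithms run to run).
import torch
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Chainer: select deterministic cuDNN routines where available.
import chainer
chainer.global_config.cudnn_deterministic = True
```

Fixing the random seeds on top of this is also necessary for run-to-run reproducibility, of course.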


#12

Thanks! Making a note! I’ll let you know if I figure out something!


#13

Hi Emil,

Different ways to store and compress the parameters of neural networks.

Are you referring to methods similar to “Deep Compression” by Song Han? We were working on similar stuff until we changed track this year.
We did some experiments on model compression and on how sparseness (compression) impacts robustness to adversarial attacks. We have since moved on to other problems, as current DNN models are compact enough and perform much better than some of the larger models. For example, MnasNet models [1], designed with neural architecture search, are small enough and have pretty good accuracy for image classification.
[1] https://www.tensorflow.org/lite/models
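For readers curious what the simplest form of the sparseness discussed above looks like, here is a minimal magnitude-pruning sketch (pure Python with toy numbers, not taken from any of the papers mentioned):

```python
# Magnitude pruning: zero out the smallest-|w| fraction of weights,
# the most basic way to induce sparsity in a trained network.

def prune(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude entries."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.1]
pruned = prune(w, 0.5)
print(pruned)  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Real pipelines like Deep Compression iterate this with retraining (and follow with quantization and entropy coding); this sketch only shows the thresholding step.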