==== PhD Student ====
I finished my Ph.D. at the Intelligent and Interactive System group at the Universität Innsbruck (Austria) under the supervision of Antonio Rodriguez-Sanchez, and I work at DeepOpinion as a machine learning researcher. I obtained my M.Sc. and B.Sc. degrees in Computer Science at the Universität Innsbruck (Austria) and currently study the gap between the expressivity and learnability of neural networks. At DeepOpinion I study how this knowledge can be transformed into novel AutoML and Neural Architecture Search (NAS) algorithms for natural language processing (NLP) models that outperform current SOTA methods.
[[https://scholar.google.com/citations?user=THmkZOIAAAAJ|Google Scholar]]
=== Areas of Interest ===
----
My main interest is improving state-of-the-art results in machine learning by understanding the gap between the expressivity and learnability of neural networks, which appears to be large. Here is the list of publications: [[https://arxiv.org/abs/1905.08744|1]], [[https://arxiv.org/abs/2011.02956|2]], [[https://preregister.science/papers_20neurips/55_paper.pdf|3]], [[https://www.sciencedirect.com/science/article/pii/S2665963821000014|4]], [[https://arxiv.org/abs/2103.04331|5]], [[https://arxiv.org/abs/2102.11944|6]], [[https://arxiv.org/abs/2105.14839|7]], [[https://arxiv.org/abs/2104.07393|8]], [[https://arxiv.org/abs/2201.11091|9]]\\
=== Positions ===