AI Ethics
- Alongside research on both narrow and general intelligence, we need to advance the field of AI Ethics to understand the moral, ethical and philosophical impact Artificial Intelligence has on society. This new and growing field has the best chance of succeeding if we all take part in shaping it, and many researchers in the field share the view that it demands an interdisciplinary approach. Recent work includes:

- A framework released by the Oxford Internet Institute. It draws an analogy to bioethics and extends that framework by asking a critical question: "are we (humans) the patient, receiving the “treatment” of AI, the doctor prescribing it? Or both? It seems that we must resolve this question before seeking to answer the question of whether the treatment will even work."

- The Ethical OS Framework, developed by the Institute for the Future and the Omidyar Network. It is a toolkit and guide that helps startups and VCs consider the ethical implications of their products.

Language
- I've been fascinated by linguistics and cognitive science. Language is essential to human intelligence and to the way our cognitive capacity lets us continuously come up with new kinds of knowledge representations. AI research has demonstrated that autonomous agents cooperating toward a common goal can invent their own language along the way. By taking an interdisciplinary approach, we can experiment with new kinds of probabilistic programming languages that draw on the power and flexibility of human languages (a toy sketch of that idea follows below). I think we are in the process of inventing new types of language interfaces (even games) that will change how we interact with each other, with ourselves and with our technology.
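
  To make the probabilistic-programming idea a little more concrete, here is a minimal sketch in plain Python, not tied to any particular framework; the primitives and names are invented purely for illustration. The point is only that concepts are composed from a small set of primitives by a generative program, and "learning" is inference over which program could have produced an observation.

```python
# A minimal, illustrative sketch of the probabilistic-programming idea:
# concepts are small compositions of primitives, observations are noisy
# renderings of concepts, and learning is inference over concepts.
# All names here are hypothetical; no external libraries are used.
import random

PRIMITIVES = ["line", "arc", "dot"]

def sample_concept(max_parts=4):
    """Generative model: a 'concept' is a short composition of primitive parts."""
    n_parts = random.randint(1, max_parts)
    return tuple(random.choice(PRIMITIVES) for _ in range(n_parts))

def render(concept, noise=0.1):
    """Noisy 'observation' of a concept: each part may occasionally be misread."""
    return tuple(
        part if random.random() > noise else random.choice(PRIMITIVES)
        for part in concept
    )

def infer(observation, n_samples=20000):
    """Rejection sampling: which concepts could have produced this observation?"""
    counts = {}
    for _ in range(n_samples):
        concept = sample_concept()
        if render(concept) == observation:
            counts[concept] = counts.get(concept, 0) + 1
    total = sum(counts.values()) or 1
    return {c: k / total for c, k in sorted(counts.items(), key=lambda x: -x[1])}

if __name__ == "__main__":
    observed = ("line", "arc")
    posterior = infer(observed)
    for concept, prob in list(posterior.items())[:5]:
        print(concept, round(prob, 3))
```

  Rejection sampling is used here only because it is the simplest possible inference scheme; real probabilistic programming languages provide much richer model descriptions and far more efficient inference.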

- In their recent work "Building Machines That Learn and Think Like People", Lake, Ullman, Tenenbaum and Gershman lay out the key ingredients for moving AI research towards human-like learning: compositionality, causality and learning-to-learn. Above all, they suggest that deep learning and other computational paradigms should aim to learn from as little training data as people need, and that as a research community we should evaluate models on a range of human-like generalizations beyond the one task a model was trained on (a toy version of that evaluation setting is sketched below).
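
  Here is a deliberately toy sketch of the one-shot setting they argue models should be evaluated in: classify novel items after seeing a single labelled example per class. The features, categories and nearest-prototype rule below are made up purely for illustration; the only point is that the "training set" is as small as what a person would need.

```python
# One-shot classification sketch: a single labelled example per class,
# then novel items are assigned to the class whose example is nearest
# in feature space. Data and features are hypothetical.
from math import dist

# One labelled example per class (the entire "training set").
one_shot_examples = {
    "cup":   (0.9, 0.2, 0.1),   # made-up features: roundness, height, handles
    "glass": (0.8, 0.6, 0.0),
    "bowl":  (1.0, 0.1, 0.0),
}

def classify(features):
    """Assign the class whose single example is closest to the new item."""
    return min(one_shot_examples, key=lambda label: dist(features, one_shot_examples[label]))

if __name__ == "__main__":
    novel_items = [(0.85, 0.25, 0.9), (0.95, 0.55, 0.0)]
    for item in novel_items:
        print(item, "->", classify(item))
```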

Environmental Sustainability
- Fascinated by the concept of the Intelligent Biosphere developed by Drew Purves from DeepMind, I think technology should bring the environment closer to people. By creating new kinds of sociotechnical feedback loops, we could see the microscale and macroscale impacts of our actions on the environment, and learn from Nature and biomimicry to achieve higher levels of cooperation with each other through our technology.