by Ashkan Kazemi (PhD student – Michigan AI)

NOTE: The original post is live on Meedan’s website: https://meedan.com/blog/claim-matching-global-fact-checks-at-meedan/

The Association for Computational Linguistics conference (ACL) 2021, a top publication venue and event for research in natural language processing (NLP), happened virtually from August 1-6, and I was fortunate to present our own paper at the conference: “Claim Matching Beyond English to Scale Global Fact-Checking”.

From the opening remarks, everyone knew that a major theme of the conference would be large language models, and according to program committee co-chair Roberto Navigli, BERT-based models were the most prevalent topic in this year’s proceedings. The conference also brought a fresh and critical perspective, from the opening remarks to discussions around ethics, CO2 emissions and social good in natural language processing (NLP). It reminded me of Dr. Strangelove, Stanley Kubrick’s cinematic masterpiece about the threat of nuclear war and how thoughtless actions by a small group of influential people could endanger humanity. While scientists once only speculated about the threat of artificial intelligence to humanity, the AI we feared already exists. It was heartwarming and hopeful to see the computational linguistics community engage in these conversations before we hit the point of no return on harmful AI and language technologies.

ACL Presidential Address

One of the opening talks of the conference was Professor Rada Mihalcea’s presidential address as the 2021 president of the Association for Computational Linguistics. Coincidentally, she is also my PhD advisor!

Dr. Mihalcea called on the NLP community to “stop chasing accuracy numbers” and expressed that “there is more to natural language processing than state of the art results.” She rightfully pointed out that neural networks have taken over a large part of NLP even though they have major shortcomings such as lack of explainability, concerning biases and large environmental footprints that current NLP benchmarks overlook. The speech followed a year of heated discussions around the ethics and implications of large language models that peaked when Timnit Gebru was fired from and harassed by Google after submitting a paper critical of large language models to the ACM FAccT conference.

Large language models (LLMs) like BERT have transformed natural language processing and played a major role in recent advances in AI technology: powering personal assistants like Siri and Alexa, automating call center services and improving Google search. There are no silver bullets, however, when it comes to these models. A 2019 paper found that training a large Transformer-based language model with neural architecture search can emit about five times as much CO2 as a car does over its lifetime. While a large number of ACL 2021 participants used pretrained LLMs, recently released LLMs like OpenAI’s GPT-3 or Google’s T5 cannot be trained on an academic budget, leaving this impactful research direction monopolized by big tech companies. This is all made worse by the biases encoded in these models, including stereotypes and negative attributions to specific groups, that make their widespread adoption dangerous to society.

Together with the major limitations of neural networks mentioned in Dr. Mihalcea’s presidential address, we can see that amid the hype surrounding transformer-based language models, they are likely still a long way from achieving human-like intelligence and understanding of language. To quote an anonymous survey respondent from Dr. Mihalcea’s research, “[NLP]’s major negative impact now is putting more money in less hands”; we should shift our focus from improving accuracy numbers to factors such as interpretability, emissions and ethics, so that we build language technology that benefits everyone, not just a handful of powerful companies.

Green NLP Panel

Green NLP is a community initiative inside ACL that aims to address the environmental impacts of NLP. A panel of academic and industry researchers moderated by Dr. Iryna Gurevych discussed these impacts on the second day of the main conference. Dr. Jesse Dodge started the panel with a presentation on “Efficient Natural Language Processing”, an effort to encourage more resource-efficient exploration in NLP and to address some of the challenges that creative, low-budget research faces when publication venues are dominated by work on LLMs, without slowing down the impressive progress NLP has seen in recent years. Throughout the panel, many interesting points were brought up by the audience and the panelists.

Time and again, panelists expressed concern about the lack of access to the resources required to reproduce many of the papers presented at ACL conferences, and some even called for interventions. Dr. Mona Diab argued that instead of the current “Black Box” approach to NLP through the use of LLMs, we should move towards “Green Box”, efficient NLP that is easily reproducible and accessible to a broader and more diverse group of researchers, eventually democratizing NLP while in parallel reducing the emissions of our research. Others pointed out that the current setup in NLP discourages competition from academia, and that moving towards a greener, more efficient NLP could mitigate that and increase creativity in the community’s research output.

The panel ended with a question from the audience asking how the energy use and emissions of NLP compare with those of cryptocurrencies like Bitcoin. One of the panelists, Dr. Emma Strubell, explained that we simply don’t have an answer to this question yet. While Bitcoin’s energy consumption could currently power the Czech Republic twice over, there are active efforts to reduce emissions from cryptocurrencies that were only made possible through measurement, something the NLP and AI community may be lagging behind on. There is a lot to be done to ensure NLP is democratized and environmentally safe, but community initiatives like Green NLP spark hope that these goals could become a reality.

NLP for Social Good

The conference theme track, a workshop and a social event all shared the same topic, NLP for social good: an effort from the Association for Computational Linguistics to nurture discussions around the role of NLP in society. These discussions included efforts to define what “NLP for social good” means, to identify both positive and negative societal impacts of NLP and to find methods for better assessing these effects. Dr. Chris Potts’ closing keynote, “Reliable characterizations of NLP systems as a social responsibility”, offered fresh, detailed directions for reducing the adverse social impacts of NLP systems and for building models whose performance promotes our collective social values.

At the NLP for Social Good birds-of-a-feather social event led by Zhijing Jin, Dr. Rada Mihalcea and Dr. Sam Bowman, a friendly conversation started around questions of community building, current NLP for social good initiatives and directions for the future to develop NLP with positive social impact. The consensus on defining “social good” was to go with a loose and broad definition, as long as we don’t overstate the impact of the research, as computer scientists sometimes do. Topics such as NLP for climate change and preserving indigenous languages were brought up as research initiatives that the NLP for Social Good community could focus on in the near future. I unfortunately could not attend the “NLP for Positive Impact” workshop, but I encourage readers to check out its proceedings.

Ending on some favorite NLP + CSS papers from the conference

Based on their summer 2021 readings, our graduate students recommend:

Andrew Lee (3rd year PhD student in the LIT lab):

1. One Hundred Years of Solitude by Gabriel García Márquez is a mystical, captivating, dream-like novel. Looking back, what I appreciate most about this book is the wide range of stories that the Colombian author takes his readers through, from intimate, personal, and cherished familial episodes all the way to civil war, revolution, and even violent massacres. Such a wide range of themes left me with a wide spectrum of emotions as well. The novel is full of mystery and magic, while feeling very relatable and true to life. The novel is also full of unique and colorful characters – one moment you may find yourself rooting for the noble cause of a “protagonist”, followed by unusual character developments that turn them into unrelatable and incomprehensible antagonists. Throughout the novel, I was so immersed in the characters that by the end, I was left with a feeling of vacancy. For anyone looking for something unique, whether it’s story, storytelling, characters, character development, themes or premises, “One Hundred Years of Solitude” is a classic worth reading.

2. For anyone looking for self-improvement, The 7 Habits of Highly Effective People by Stephen R. Covey may also be an excellent choice. In this book, the author walks through 7 habits that he believes everyone should build. Perhaps you are rolling your eyes, thinking “Great, another self-help book”, and I understand where you are coming from, but what this book has to offer is genuinely useful and applicable takeaways for everyone. Because of its abundance of advice, it is a book that is refreshing every time I come back to it, with new perspectives and takeaways each time. Perhaps one of the key takeaways for me was the power of habits. Although they may be simple, routine thoughts or exercises, over time they can really accumulate into building one’s character. The book also gives the reader a chance for self-reflection and to be honest with themselves. What are the areas that I struggle with? What areas can I improve on? Taking a minute or two to reflect on one’s own character naturally makes way for new healthy habits to form. Reading the book is the easy part — implementing the habits is where the challenge comes. Building new habits is not easy, but as the Roman poet Ovid says, habits change into character.

Do June Min (2nd year PhD student in the LIT lab):

3. The Linguistics Wars by Randy Allen Harris – As a natural language processing (NLP) researcher, I often feel that my understanding of linguistics is too shallow. Although we freely borrow ideas and concepts from the field and present research work inspired by theoretical and experimental linguistics, I feel that currently the center of gravity of cutting-edge NLP research mainly lies on the engineering side. Granted, there is computational linguistics, which is the other crossbreed between linguistics and computer science, and it more explicitly inherits and recognizes the legacy and framework of academic linguistics. 

This was part of why I picked up this book: to obtain a view of how linguistics has evolved through exchanges and debates between scholars who often challenged the norm and advanced revolutionary ideas. After finishing the book, I may not have retained all of the ideas introduced, but I can say I now more clearly see how linguistics and NLP researchers are fundamentally engaged in the same goal of figuring out what language is and how we use it to make and convey meaning. Roughly, NLP and AI researchers work from the bottom up and present models of language using the state-of-the-art tools that they think have the best capacity to mimic and model human communication. Conversely, linguists try to reason, both empirically and theoretically, about what the models must look like, given what we know about how humans actually use language. I won’t go into the details of the book, except to say that the author unravels an engaging story about how linguistics was galvanized and revolutionized by the legendary Chomsky and his equally impressive colleagues and proteges. Plus, there is some drama – “Wars!”

4. How to Change Your Mind by Michael Pollan – Before I came to the U.S., my idea of “drugs” didn’t differentiate between marijuana, methamphetamine, or psychedelics. After I studied here for a few years, my view of drugs became somewhat more nuanced; I had friends from my undergraduate years who smoked marijuana regularly and seemed completely fine and functional, even exemplary in many regards. Perhaps not all drugs are that bad, I had concluded. Still, life in America strengthened the idea of the “big bad drug” that destroyed lives and families. Countless people were addicted to heroin, methamphetamine, or you name it, and I could see them wandering and suffering in the streets of major cities. “Drugs” were the cause.

Then, I was introduced to this book by a health/lifestyle podcast I occasionally listened to. The author, Michael Pollan, appeared as a guest and laid out the synopsis of the story he tells in the book. In short, the idea of narcotics as commonly imagined by people like me is political and fluid in nature, and this unfortunately stops us from exploring various natural and synthetic substances as tools for medical treatment and for improving quality of life. I was most impressed by Dr. Roland Griffiths’ finding that terminal patients reported having an easier time accepting the prospect of dying after a psychedelic drug was administered in a therapeutic, controlled setting. Some may think this feels insubstantial compared to an effective treatment of the terminal disease itself. However, to me it seems like a great way to help patients overcome their fear and anxiety and spend the last moments of their lives in peace, a priceless gift.

This book definitely changed how I think about drugs. I became aware of the differences between substances that are deemed illegal or problematic. I am also hopeful that in the future we will be able to walk back the hasty decisions and judgements we made about some substances and embrace them as tools at our disposal, not as nefarious corruptors like I had once thought.

5. Submission by Michel Houellebecq – The title of this novel is a dual reference to the religion of Islam (meaning submission to God in Arabic) and the (imagined) demographic and political surrender of the West to the “hordes” of immigrants from third world countries. The plot revolves around an aging, emasculated literature professor in France who witnesses a pivotal moment in French politics, when a left coalition forms to elect a Muslim presidential candidate named Ben-Abbes, backed by the rapidly growing Muslim population in France, in order to stop the ultra-rightist Marine Le Pen from being elected President of France. The plot is far-fetched, at least now in 2021, and sounds comical, but Houellebecq nonetheless trudges on to have the protagonist submit to a new French order, in which he is persuaded to convert to Islam and teach at a male-only institution, with a stable income and a new family as the reward for his spiritual rebirth.

I’d like to clearly state that I don’t agree with Houellebecq’s politics. He is a self-proclaimed Islamophobe who thinks that women’s liberation did irreversible damage to the Western world. What I do appreciate about his novels, then, is his brutal honesty in depicting how a good chunk of the well-off world increasingly feels attacked and out of place in a rapidly changing society. Houellebecq doesn’t sugarcoat or tone down the incel-ish frustrations thought and voiced by the protagonist, who is clearly the author’s surrogate, in order to make him more palatable. With various social justice movements springing up and previously disenfranchised peoples being given more attention (although mostly confined to media coverage and social discourse, rather than actual policies and material conditions), the emasculated literature professor feels that progress will leave him deprived of social standing and prestige, or even of an assurance of a minimum quality of life. This paranoia may be imagined and the complaints may look rather whiny and one-sided, but at least they reflect how a significant section of the developed world thinks, as evidenced by the rise of right-wing and nativist parties and general discontent with the status quo of globalism.

It seems to me that there is no easy solution to this growing schism. Leaders of the West and the third world alike are stoking the fire for their own gains instead of finding a path to reconciliation. Thus the seemingly farcical conclusion of the novel reads as Houellebecq’s way of admitting that neither the fascist fantasy of nativist purification nor the liberal wish that the woke movement can educate the hate and racism away is a plausible solution.

My Experience with the ExploreCSR Program

by Grace O’Brien

Hi! My name is Grace O’Brien and I’m a rising senior at the University of Michigan. I am majoring in Pure Math and Spanish and minoring in Computer Science. This past semester I participated in Explore Computer Science Research (ExploreCSR), a program sponsored by Google in collaboration with Girls Encoded. ExploreCSR is a program designed to introduce undergraduate students from underrepresented groups to research in computer science and help build their confidence.

As the Fall 2020 semester began, I planned out my schedule and realized I would have time for another activity or project. As a math major, I have had experience with pure and applied math research, but I still wasn’t sure what type of career I wanted to pursue. I also greatly enjoyed my computer science and Spanish courses, so I started investigating how I could combine these various interests. After a bit of searching, I discovered the field of Computational Linguistics and Dr. Rada Mihalcea’s Language and Information Technologies (LIT) lab. I emailed Dr. Mihalcea to learn more about her research, and she suggested I apply to ExploreCSR.

ExploreCSR immediately interested me because navigating the world of computer science research has been very intimidating to me and ExploreCSR seemed like a great way to dip my toes in and gain some experience. I was excited to meet other women interested in computer science and connect with mentors in the field.

Once I was accepted to ExploreCSR, I was paired with my mentor Allie Lahnala, a PhD student in the LIT lab. Allie and I got along great right off the bat. We have very similar academic interests and she was even in the same student organization (STEM Society) as me when she was an undergraduate. She was extremely supportive and welcomed me to the program with lots of ideas for topics we could research together. After a few weeks of literature review and acquainting myself with the current research, Allie and I decided to work on a project related to Computer-Assisted Language Learning, specifically for Second Language Acquisition of Spanish. This project has been very interesting to me because it falls right at the intersection of my interests; I get to use skills from all my areas of study. We found a Subreddit called r/WriteStreakES where posters learning Spanish write short responses to daily prompts and native Spanish speakers reply with corrections and suggestions to help improve their writing skills. For example, here is a post by a non-native speaker and its corresponding correction:

Using the data from this source, we are working on building a model to predict if a sentence was written by a native or non-native speaker, and, if non-native, if the sentence has an error in it. I started by reading through many of these posts to get an understanding of the data. I found that the most common type of error is mistaken adjective-noun agreement. Some native speakers give corrections like the above post, copying the text and making the necessary changes. However, others prefer to type a few bullet points with comments or explanations of how to use certain words. We decided that posts where the original text and the comment are well-aligned would be more helpful for us, since we can directly compare correct and incorrect sentences. In order to filter the posts to fit this criterion, we use Jaccard Similarity and Levenshtein Distance. Jaccard Similarity measures what proportion of the words are the same between two strings of text. For example, the Jaccard Similarity between the following two strings is 0.5, since the words they share make up half of all the distinct words across the two sentences.

String 1: We ate dinner at the new restaurant.

String 2: We ate at a restaurant.

Number of words in common: 4

Total number of distinct words: 8

Jaccard Similarity: 0.5
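
To make the computation concrete, here is a minimal Python sketch of the Jaccard calculation above; the lowercasing and punctuation handling are my own assumptions, not necessarily the preprocessing used in the project.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Share of distinct words that appear in both strings (intersection over union)."""
    words_a = set(a.lower().rstrip(".").split())
    words_b = set(b.lower().rstrip(".").split())
    return len(words_a & words_b) / len(words_a | words_b)

print(jaccard_similarity("We ate dinner at the new restaurant.",
                         "We ate at a restaurant."))  # -> 0.5
```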

Alternatively, Levenshtein Distance measures the number of edits (in the form of insertions, deletions or swaps) needed to convert one string of text into another. For example, by changing B to K, R to N, and inserting S, we can transform “Bitter” to “Kittens” in three steps.

String 1: Bitter

String 2: Kittens

Levenshtein Distance: 3
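
And here is a compact dynamic-programming sketch of Levenshtein distance, again a standard textbook implementation rather than the project's code:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, or substitutions turning a into b."""
    prev = list(range(len(b) + 1))        # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                        # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = curr
    return prev[-1]

print(levenshtein("Bitter", "Kittens"))  # -> 3
```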

These techniques allow us to pair a sentence written by a non-native speaker with the corrected sentence written by a native speaker, if one exists. Then, we train a binary classifier to identify which sentences were written by which type of speaker. Currently, our classifier has only about 56% accuracy so we are working to improve it by considering more linguistic features.
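
For a rough idea of what such a classifier can look like, here is a minimal sketch using TF-IDF character n-grams and logistic regression over a few invented toy sentences; this illustrates only the general setup, not the features, data, or model actually used in the project.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: 1 = written by a native speaker, 0 = non-native.
sentences = [
    "Ayer fuimos al mercado y compramos frutas frescas.",              # native
    "Yo soy muy emocionado para el fiesta de mañana.",                 # non-native (agreement errors)
    "Me encanta pasear por el centro los domingos.",                   # native
    "Ella quiere estudiar la medicina en una universidad muy famoso.", # non-native
]
labels = [1, 0, 1, 0]

# Character n-grams are a cheap signal for learner errors such as
# adjective-noun agreement; a real system would add richer linguistic features.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(sentences, labels)

print(model.predict(["Nosotros estamos muy cansado hoy."]))  # predicted speaker type
```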

One of my favorite parts of ExploreCSR was the chance to connect with a mentor. I appreciated meeting Allie and learning more about what life as a graduate student looks like. Hearing firsthand what she is working on and the path she took to get here makes the prospect of attending graduate school seem much less intimidating. Unfortunately, due to the virtual semester, I didn’t have many opportunities to meet other undergraduate researchers in the program. However, I really enjoyed listening to everyone’s final presentations. It was fascinating to see how many different types of projects students came up with. Among the 30 students, no two projects were the same, which I think speaks to the breadth of topics within computer science. I particularly enjoyed the project investigating the implicit bias that is encoded into much of AI.

Although my career will likely not be closely related to my ExploreCSR project, participating in this program has definitely helped me develop research skills and build confidence as a woman in STEM. In the future, I plan to pursue a PhD in pure mathematics with the goal of working as a professor and researcher. I hope one day to be able to help organize a program like ExploreCSR to mentor the next generation of researchers so other women can benefit from the same opportunities I have had. Overall, participating in ExploreCSR was a wonderful experience for me and I’d recommend it to anyone interested in learning more about the wide variety of research areas in computer science.

About the ExploreCSR Program

by Allie Lahnala

When I began my PhD in CSE in Fall 2018, my advisor Professor Rada Mihalcea told me about a new program she was developing and asked if I wanted to be involved. That year, Google Research began offering exploreCSR (explore computer science research) awards at the start of the academic year to fund research initiatives that encourage historically marginalized students to pursue graduate studies and careers in computing research. According to the Computing Research Association’s 2020 Taulbee Survey, of the CS doctoral degree recipients in 2019-2020, 19.9% were female, and “the combined percentage of CS doctoral graduates who were American Indian or Alaska Native, Black or African American, Native Hawaiian/Pacific Islander, Hispanic, or Multiracial Non-Hispanic was 3.8 percent.” Prof. Mihalcea had just received an exploreCSR award with the idea to offer paid research opportunities in which students receive personal mentoring on a computing research project from experienced researchers (professors, research scientists, post-doctoral researchers, and senior doctoral students). The organizers within CSE would hold additional research skills workshops and socials for students to connect with each other (and eat tasty snacks). The program would provide an avenue for developing research skills and learning about careers in CS research through experience. 

I thought about how such an opportunity might have impacted me and all the things I wish I had known before the start of my PhD. The idea of doing research had always appealed to me, but as an undergraduate struggling to keep up with my computer science courses and the financial costs of university studies, I had the idea that graduate school and computing research were beyond my reach. I had not known an undergraduate doing CS research who might have been able to tell me otherwise, and I was not even aware that tuition is funded and a stipend is provided for PhD students, and in many cases for Master’s students doing research as well. So when Prof. Mihalcea asked if I would be interested in helping organize the program, I was immediately invested, and have been involved each school year since 2018.

Any undergraduate is welcome to apply, no matter if they are a senior with several upper-level CS courses under their belt or a freshman who is still deciding their major and how computer science will fit in. Mainly, we look for candidates who show a budding curiosity or developed interests in CS research and are motivated by the issues of representation in the field. The selected exploreCSR participants always have a wide variety of interests, ranging from core computing aspects to the arts, language, social sciences, and medicine. The mentors conduct computing research across disciplines at the university, from the EECS department, School of Information, and even the School of Music. With this variety of expertise, we try to pair each student to a mentor based on the student’s unique interests, in hopes that the students will research a topic that is most exciting for them.

At the end of the year, we hold a celebration where the students present what they have learned about computing research and are invited to share their work. Across nearly thirty students, there are nearly thirty new research challenges that most in the audience learn about for the first time. That is, each student not only embarks on their own exploration of computing research, but they also demonstrate to each other that there are a plethora of scientific challenges that require more minds like their own.

by Gemmechu M. Hassena

About the author:

Hi, my name is Gemmechu M. Hassena! I’m a senior year software engineering student at Addis Ababa University. I came to work at the University of Michigan through the African Undergraduate Research Adventure program (AURA) in 2020. Throughout my time at Michigan, I’ve worked on a Computer Vision project titled “Scene understanding using humans as a ruler” with Michigan AI Lab members Prof. David Fouhey and Ph.D. student Christopher Rockwell. Our research aims to identify floors and scenes by first identifying people in videos, and then calibrating the size and depth of the scene based on their height. To learn more about our system, watch my presentation detailing our work:

Rather than explaining my research, this blog post details how I got here and how I succeeded as a remote student during the pandemic :).

About AURA: The African Undergraduate Research Adventure (AURA) program is a research exchange program for undergraduate students at the Addis Ababa Institute of Technology (AAiT), Ethiopia. The AURA program was founded by Prof. Valeria Bertacco, Prof. Todd Austin, and Prof. Fitsum Andargie to create collaborations between UM faculty and AAiT students, which could lead to a range of research collaborations. Through the program, students come to Ann Arbor for 12 weeks during the summer to engage in research work with a College of Engineering faculty member.

My Story 

I have lived across Ethiopia, in different cities and many, many houses. We’ve lived in some places for less than 2 months; modern-day nomads, in short. But wherever I go, one thing is always the same: I will be called on to fix computers, radios, mobiles, or any electronic device that needs repairs in the neighborhood.

Growing up in Ethiopia, learning about technology was not easy. The electricity cuts out often. The internet is found only in big corporations, government offices, or internet cafes (which are expensive and slow). With every bit of access I had to computers and other electronic devices, I grew more curious to learn about technology. I taught myself how to edit photos and videos, and this later became the inspiration for me to pursue Software Engineering as a career.

I hear people who come from other countries complaining about things which for me are quite normal. Let me tell you, and you can be the judge of whether it sounds normal to you or not. In Ethiopia, the electricity will go and come as it wishes, and sometimes it may be gone for 2 weeks straight for no reason at all. And the same goes for water. Is this normal to you?

Yet because of this, everyone is more prepared. Everyone knows that if the electricity is out, one must prepare candles and cook with wood or charcoal. Everyone knows that when water is available, one must fill their reserve tank.

But most often I hear complaints about how the Internet is slow. Yet I don’t know why people complain so much. It seems people don’t approach slow internet with a backup mentality as they would for an electricity outage or water shortage. Although there’s no equivalent to switching to wood/charcoal or using a reserve water tank, one can walk away from the computer and come back later. And the trick here is that if you want a good internet connection, the basic thing should be to wait until midnight, once everyone’s asleep. This has been my strategy and rather than getting frustrated, I just see it as similar to preparing for outages with wood or a filled reserve water tank. 

Pre-Program


When I first received the email saying I was accepted to be one of the participants of AURA2020, I was super happy and excited. Knowing how AURA2019 had opened a lot of opportunities for seniors at my university, I was thrilled to get started. However, right when the program was about to start, COVID-19 became very concerning and we were told we would not be able to go to the US, and that AURA2020 was instead going to be held online. As happy as I was that the program was not canceled, I was also very worried about it being virtual because I didn’t have an internet connection at home. Living at the edge of the city meant that internet service did not reach our house. Discouraging as the situation might have been, I started thinking of ways that I could still be part of the program, and thankfully things started to look up.

One morning during breakfast my sister said, “My mobile card is consuming a lot whenever I turn on the data, can you fix it?”, and she handed me her phone. I couldn’t believe what I was seeing: her phone was connected to 4G! I found this very surprising because 4G internet was not available in my neighborhood.

This turn of events gave me hope. I knew that my internet problem could be solved if I just bought a monthly internet package for 2,000 birr (55 USD). Unfortunately, this was more than I would normally spend on food and transportation while at university! However, since schools were closed due to COVID, I decided to invest in the internet package and take part in AURA2020. I didn’t tell my parents I was accepted to a summer program at UMICH, because I worried they would think the 2,000 birr I was spending on the internet was way too extravagant. I contemplated telling them why I really needed the internet, because surely they would have been happy to invest in my education. But I eventually decided against it, because I knew telling my parents also meant that I would have to explain it to my large family. And when I say large, I mean large. I have 73 uncles and aunts! Explaining it to everyone was sure to be dramatic.

It was a lot of pressure to invest in the first month’s internet package with no idea of how I was going to pay for the rest. But fortune really does favor the bold. Fast forward a month: the internet had worked its magic on my parents and they had become addicted to having everything at the tip of their fingers. And so they agreed to cover our internet expenses. Ecstatic at the sight of things working in my favor, I was ready to start my journey in the AURA2020 program.

The Freaking Out


When I started the program I really didn’t know much about ssh or working remotely on a server. So at first, I couldn’t figure out how to set up my workstation, and once I did, I realized it was not the optimal way of doing it. To give you a glimpse of what it was like, I was working only in the terminal, editing code with a lag of around 2 seconds after every line I typed. On top of that, downloading was not an option. I tried downloading files through the terminal and it would run at 12 KB/sec, meaning it would take forever to download the dataset I was working on.

The lab I joined also has a tradition of giving a starter project to each new person, and finishing the project took me more than 2 weeks, longer than expected. I felt horrible, thinking that if this project took me so long, what might the timeline on my main project look like? Right after finishing the starter project, we began the main project and I started working with completely new technologies I had never even heard of. I was overwhelmed, and with my slow connection and terminal editing, it was taking me forever to get even simple tasks done. I asked around and people in the lab suggested VS Code to me. It was good, but with my connection cutting off frequently, I was better off using Vim (a terminal editing tool).

One of the things I wanted to get out of this experience was to prove to myself that I could do AI. I wanted to get the exposure that would enable me to see if I would enjoy doing AI full time. I was disappointed in my progress and I thought maybe AI was not for me.  In spite of all this, my mentors, David and Chris, were always positive and appreciative of my efforts and gave me good direction.

 

The Learning Begins


In the middle of using Vim as my regular editing tool, I looked into whether I could use a Jupyter notebook in a remote setup. I was so happy after I learned it was possible, and this became a turning point for me. Suddenly I could write faster and see results more quickly. But the notebook still had disadvantages. I couldn’t train models that took more than an hour because my network would inevitably cut off and I’d have to start all over again. It was during this dilemma that I was introduced to Tmux by my labmate Richard Higgins. Tmux works like magic: it runs on the server all the time, and my local machine can connect whenever needed and attach to whatever state I left it in last time.

Another thing that my slow internet made hard was remote collaboration. I’ve said my connection was slow multiple times, but let’s finally talk figures. In theory, my connection should have been very fast, as it is a 4G LTE connection that can go up to 36 Mb/s. The problem, however, is that our area has a very weak cellular signal, as it is a new area where infrastructure is still being built. As a result, it was really hard for me to stay in a Zoom call for long because my internet would eventually cut off, during which time either I couldn’t hear what my team was discussing or they couldn’t hear me clearly. Thankfully, everyone in the lab was very understanding and patient with me whenever this happened.

Have you ever felt down, and someone came along and said a few words that boosted your energy in a split second, because it turned out to be all you needed to hear? Even though I now had a decent remote working setup, the fact that I was slow with the project and didn’t have good results had decreased my self-confidence. It was during this time that David told me in one of our weekly meetings how smart and hard-working I am. With his kind words, he lifted my spirits, pointing out the things I had done well. That really helped me build my confidence back and motivated me to work even harder.

Unfortunately, in the middle of the summer, my country went into a total internet shutdown. During the first days, we were in total shock and thought it would be back in a few days. But a few days turned into 3 weeks, and more than a month in some areas. This had mostly disadvantages, but it also had some advantages for me. It gave me time to think about how I could get the most out of this program while enjoying the work. So I came up with a series called “Mini Ethiopia,” where before weekly meetings I would prepare a 1-minute video about my country, exploring topics in a fun and educational way. This turned our meetings into a time not only for me to learn, but for my mentors as well.

In addition, it gave me time to read papers on the subject matter and brush up on my computer vision knowledge through books. Once the internet was back, I tried my 1-minute fun videos and my labmates enjoyed them. David’s favorite was one about how the movie 2012 was based on the Ethiopian calendar; with COVID forcing everyone inside, it almost felt like the end of the world, only a few years later than the Ethiopian calendar predicted. I was also happy to see my pace on my project tasks increase.

The Mindset Change


At the end of the day, what was supposed to be a fun, exciting, and adventurous summer of my college career turned out to be exactly that, just in a virtual setting. AURA played a big role in demystifying the Ph.D. for me and many of my classmates. A piece of advice I like to go back to is something my labmate Richard said: “think of us Ph.D. students as just normal students, but we now just also read a lot of papers.” I took the advice and started reading papers and engaging in a paper review study group with other AURA students. When David sent me his dissertation, I saw how the pieces of one’s research fit together into one big picture.

Conclusion

Through the AURA program, my peers and I learned how to apply to a Ph.D. program. We gained the skills and experience that helped us think critically about our career goals. With the encouragement of our advisors and the AURA committee, we applied to multiple universities that we didn’t think were within reach prior to the summer. All in all, AURA2020 has changed our lives in ways we didn’t think were possible, and we are very grateful for the opportunity.

Cheers!

NOTE: This blog post has benefited significantly from the writing and editing help of Simret A. Gebreegziabher

by Max Smith

Challenges and Considerations

Artificially intelligent (AI) systems are increasingly enmeshed in our society. Hidden engines powering many web services and social media sites, they provide intelligent content recommendations to millions. They challenge the most brilliant human competitors at Go, Poker, and Chess — games foolishly thought to be so complex that only humans could play them well. They even participate in recreational activities such as painting and playing video games. Despite this growing resume of incredible accomplishments, AI systems still struggle with many tasks that are trivial for people. As we encounter AI systems more and more in everyday life, it is increasingly important to understand where they shine and where they struggle. Developing this common-sense understanding gives us a better working relationship between people and AI systems.

In this article, we focus on AI systems designed to take actions in a world and receive feedback on the quality of their performance. We refer to this class of action-taking AI as agents. Agents operate in specific worlds, often referred to as their environment. For an agent, the environment can be as simple as balancing a teetering pole on a cart, or, until recently, as difficult as identifying the angles between amino acids in a correctly folded protein (recent work has shown this to be viable!). Generally, we deploy agents into environments to solve problems that humans either do not know how to solve or cannot solve well. In settings where there are clear markers of success for the problem at hand (e.g., winning a game of chess), the platform can provide feedback and rewards for meeting those markers.

Many of the greatest breakthroughs in AI have come from systems in such a setup, where AI agents have shown outstanding performance against the benchmarks defined for their tasks. However, such benchmarks do not consider the impact an agent’s actions may have on other agents or humans, nor the impact of others upon the agent itself. Including other agents and humans in the mix dramatically increases the difficulty of training agents. In this article, we take a brief tour through just a few of the exciting and difficult problems involved in developing multiagent systems. These problems are meant to help one understand the state of the art in AI and give insight into how agents make decisions.

Credit Assignment

Recall back in primary school when you were put into a group project with randomly assigned members. Did you dread having to take on a majority of the work, or were you excited at the idea of coat-tailing off your peers? Despite your teacher’s best efforts, a student probably received a grade that they did not deserve as a result of the group component of the assignment.

This flashback illustrates the credit assignment problem in multiagent learning. At its core, the problem is simple: which actions, taken by whom, resulted in receiving a reward? Was it the studying you did the previous summer to prepare for this coursework that resulted in your high marks? Or perhaps you enjoyed several full nights of watching television and also received high marks — wrongly reinforcing your bad habits. In these last two cases, as humans, it’s hopefully clear how the choices taken affected the outcome. However, AI agents do not come with this intuition pre-trained and must instead learn to solve the problem while also learning the causality of their actions.

This difficulty is further exacerbated by the effect of everyone else’s choices in your group on the outcome of the project. Did your contribution to the project matter more or less than, or carry the same value as, your peers’ contributions? Which actions from which group members were good and should be repeated for the next project? When we were in school, rubrics served as a stopgap to help us understand what was done well; however, not all problems have a rubric.

In summary, when an AI system is successful, we have to figure out which actions caused its success while also understanding how the other agents acting in the environment influenced that success, potentially contributing to or hindering it unequally.

Moving Target

Unfortunately, even if we could solve the group project credit assignment problem, it would not be a panacea for learning in multiagent systems. Our agent may see how others behave and adapt its own behavior. However, the other agents may also change their behavior in response! We can think about this as our groupmates realizing we are a slacker and deciding to change teams, or to no longer carry the whole group on their shoulders.

This phenomenon is referred to as the moving target problem. At its core, it deals with deciding the best thing to do when other agents can change over time, considering both how other agents learn and when other agents are replaced. The term target refers to the best behavior, and it is moving (read: changing) as the other agents change.

Let us first consider the case where we need to interact well with only one other agent, but they are learning alongside us. One promising approach is to consider what the other agent may learn from the interaction, and account for that in your change. This reasoning can be extended infinitely, where agents consider N levels of how the other agent may respond to their change, which was itself chosen by considering the other agent’s change, and so on… This can quickly become overwhelming to think about, but solutions of this nature are being actively refined. You can think of it as nested hypotheticals: “if I do this, what will they do? And then what will I do? And then what will they do? Etc., etc., etc.”


Another approach is to not try to work as a team with any one particular agent, but instead to figure out the best behavior in the worst case. If the other agents were to be swapped out with malicious, evil agents (unbeknownst to us), then our agent needs to steel itself for operating among these evil agents. This direction is inspired by, and implemented through, learning objectives defined in game theory.
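
As a toy illustration of that worst-case mindset (my own example, not from the article): given a small table of hypothetical payoffs for each candidate behavior against each possible opponent, a "maximin" agent picks the behavior whose worst outcome is least bad.

```python
import numpy as np

# Hypothetical payoff table: rows are our agent's candidate behaviors,
# columns are possible opponents, entries are payoffs to our agent.
payoffs = np.array([
    [3.0, -2.0],   # behavior 0: great with a friendly partner, bad against an adversary
    [1.0,  1.0],   # behavior 1: mediocre but safe either way
])

worst_case = payoffs.min(axis=1)          # the worst outcome each behavior can face
robust_choice = int(worst_case.argmax())  # maximin: best of the worst cases
print(robust_choice)                      # -> 1
```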

Communication

The previous two problems, credit assignment and moving targets, focused on learning how to behave to meet a goal. There’s an even bigger problem on the table, though, and it’s one that even humans have not mastered: communication. Communication is a broad field, and one could never hope to distill all of its complexities into a simple blog post. Instead, we will scratch the surface by briefly discussing two important forms: verbal and non-verbal communication.

Verbal communication is rife with ambiguities. Borrowing an example from an earlier Michigan AI blog post by Laura Burdick: if you read “the chocolate bar,” it’s not clear whether this refers to a piece of candy or a bar in a restaurant that serves chocolate. All of these linguistic challenges faced by humans must also be overcome by AI systems. Only preliminary work has been done on verbal communication between AI and humans, simply due to the extraordinary challenge it presents. Instead, research has focused on communication between several AI agents using artificially constructed tiny languages (with simple grammars).


Non-verbal communication deals with how all of the actions you take reveal some information about your thoughts. The game Hanabi excellently highlights what non-verbal communication is and the complexities that underlie it. Hanabi is a 2-5 player card game where everyone works cooperatively to build a firework show. Each player is dealt a hand of cards, and what makes this game unique is that you turn your hand around so that all the other players can see your cards but you cannot. Each player has different parts of the “firework show” in their hands, and the group needs to play each colour in order of ascending value (I play a Yellow 1, then you play a Yellow 2, then our third friend plays a Yellow 3) if they want to win. To play, each player chooses to either blindly add a firework (a card) to the show being built in the middle of the table, hint to another player about what’s in their hand, or discard a card. Giving hints is obviously valuable, so only limited hints are allowed. However, discarding a card can earn the team more hint tokens. The show is successfully completed when all five tiers of all five colours of fireworks have been played. There are a few catches. First, a card can only successfully be added to the show if a stack of the same colour containing all of the smaller-valued fireworks already exists in the show; this means a red firework of rank three requires that the red fireworks of ranks one and two have already been played. Incorrectly playing a card causes the firework to explode, and the team can only survive three explosions. The second constraint is that when hinting to another player about the cards they’re holding, there are only two choices of hints: pointing out all cards of the same colour, or pointing out all cards of the same rank.

A super rich non-verbal language has evolved in the Hanabi community around what information may be implicitly communicated through the hint action. One simple convention is that a player will keep their oldest card in the right-most position, and this will be the card that the player will discard — referred to as the “chop principle”. Another convention is the “high value principle”: if a clue was worth giving, it should be interpreted as indicating the highest-value (best) available move for that player.

The Hanabi challenge is to construct agents that can learn to pick up on these non-trivial implicit conventions. In particular, there’s a strong interest in agents that can be placed with new teammates and quickly learn to adopt the customs of that group. This requires the agents to reason about the true intent of their teammates’ actions. Overall, this gives us a way to study the subtlety that exists in communication, possibly applicable to human communication.

Social Dilemmas

Successful communication can breed a whole new slew of challenges because it allows more meaningful interactions between agents. One of the simplest examples of this is the prisoner’s dilemma. This classic situation involves two criminals who have been caught and a detective who is trying to get a confession. The criminals are brought into separate rooms and can choose to cooperate with the detective or remain quiet. If the two criminals both cooperate they will serve a medium sentence; if they both remain silent they will serve a short sentence. However, if only one confesses they get off for free and their partner is jailed for a long sentence. While this particular example doesn’t allow for communication between the two agents, the underlying struggle faced by the agents can be at the core of many interactions. 

Should our agent cooperate or be selfish? The answer isn’t always clear and will depend on a lot of external factors, for example: will the two agents interact again in the future? How risk-averse is your agent? Is there any pre-existing agreement between the agents that might offer assurances that they both will remain silent? This problem and its many variations have been under investigation for decades; despite its simplicity, we are still learning more about this style of interaction each day. A successful strategy for these two players, if they were to repeatedly end up in a prisoner’s dilemma, is “tit for tat”: each player starts by cooperating and from then on takes whatever action the other player took in the previous interaction. In fact, this strategy has even been discovered by AI agents trained to play this game.
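
Here is a minimal sketch of that repeated interaction with one tit-for-tat player; the payoff numbers (years in prison, lower is better) are illustrative values I chose, and "cooperate" here means cooperating with your partner by staying silent.

```python
COOPERATE, DEFECT = "stay silent", "confess"

# (my move, partner's move) -> (my years in prison, partner's years); lower is better.
PAYOFFS = {
    (COOPERATE, COOPERATE): (1, 1),  # both stay silent: short sentences
    (DEFECT, DEFECT): (2, 2),        # both confess: medium sentences
    (DEFECT, COOPERATE): (0, 3),     # I confess alone: I go free, partner serves long
    (COOPERATE, DEFECT): (3, 0),     # partner confesses alone: I serve long
}

def tit_for_tat(history):
    """Cooperate first, then copy whatever the partner did last round."""
    return COOPERATE if not history else history[-1][1]

def always_defect(history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=5):
    history_a, history_b, totals = [], [], [0, 0]
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        years_a, years_b = PAYOFFS[(move_a, move_b)]
        totals[0] += years_a
        totals[1] += years_b
        history_a.append((move_a, move_b))  # each side records (own move, other's move)
        history_b.append((move_b, move_a))
    return totals

print(play(tit_for_tat, always_defect))  # tit for tat is exploited once, then retaliates
```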


The field of AI has recently turned to the game Diplomacy as a benchmark for more rich social dilemmas. In the game of Diplomacy, seven players take on the roles of European powers during World War I and compete to take control over Europe. Notably, this game involves repeated rounds of private and public discussions between the powers and then movement of their forces. This benchmark offers a challenging problem for AI in both communicating with the other players and learning social skills. Can our agents construct alliances? Will they know when they’re going to get back-stabbed? Will they back-stab an ally? The emergence of these rich social interactions is an active area of study and interest. 

Conclusion

By looking at just a few of the problems our AI systems face when interacting with other humans or agents, we can glean a lot about the state of AI in society. We have seen many incredible advances recently; however, there are still many large hurdles to overcome before Siri becomes as smart as the AI in the movie Her. Two-player games are not the world, and the world is complicated.

Preeti Ramaraj (4th year PhD student)

Diversity across different axes

Despite being a bibliophile all my life, I realized fairly recently that growing up I had read books by fewer than 10 women authors and almost none by authors of color. I began following Roxane Gay on Goodreads and decided to do BookRiot’s “Read Harder” challenge, both of which introduced me to books I would never have heard of or picked up on my own. That led me down a whole new path of experiences, a diversity of accounts, that enriched my reading more than I could have imagined. Here are three books from my newfound list of authors.

Homegoing by Yaa Gyasi begins with two sisters who were born in Africa centuries ago and separated at birth. One sister marries a white man and settles in Africa, whereas the other is forced to come to America through slavery. This book traces the stories of multiple generations of both these sisters, who never end up meeting each other. Each chapter alternates between the snippets of a person’s life in each generation of each family. You don’t just end up reading their individual stories, but you also get a reflection of the time and society in which they lived. You end up encountering a lot of painful moments in black history, but that is not just it – the fact that you see so many generations being spoken about means that you also see their normalcies. No matter what you do, read this book. It is poetic, it is grand and it is an experience that will impact you.

Alok Vaid-Menon is a gender non-conforming activist and performance artist. I came across them when a friend of mine sent me a photo they had posted. My first reaction was one of sheer surprise, not knowing how to process their atypical presentation. But the more photos I saw, the more I normalized it. It made me realize that for all my ideas of acceptance, my understanding of gender had not even begun. Beyond the Gender Binary is a tiny book (64 pages) that describes the pitfalls of the gender binary, its historical origins and why it is high time we go beyond it. There were two arguments in this book that really hit me hard. First, the gender binary asks us to put 7.5 billion people into one of two categories. Second, the gender binary exaggerates the difference between the two genders and minimizes the differences within them (and there are many). Alok Vaid-Menon describes how the gender binary is harmful for everyone — it forces you to conform to established behaviors starting very early, or be told and shown in myriad ways that you do not belong.
In the end, all this book asks for is space for everyone to live and let live, to not kill the creativity and differences that we naturally bring to the table as people. However, we are nowhere close to that simple-sounding goal. The book reminds you that people outside the gender binary (non-binary, trans, gender non-conforming people) have always been around, and yet the gender binary erases them, does not allow them to exist, let alone thrive in this world. I would highly recommend listening to this interview with them to supplement what you read in this book. 

El Deafo by Cece Bell
This is one of the most adorable books I’ve read in a while. The author tells this comic book story based on her own childhood growing up deaf. Yes, this book helps dispel stereotypes, or rather, shows the stereotypes that people who are deaf are subject to. But honestly, it’s mostly a hilarious story of the author as a child with a huge imagination who has a good time with her friends. It’s a book that I would buy for a kid, but it’s also a book that I would recommend to adults, because people often forget that people with disabilities do have what one might consider “a normal life.” And as much as it is important to understand the challenges that people go through because of their circumstances, that is not all that defines them. I absolutely loved this book; it is super short and fun, and this child’s school antics will truly make your day.

Charlie Welch (4th year PhD student)

There are two books by Peter Wohlleben that I’ve recently enjoyed: The Hidden Life of Trees and The Inner Life of Animals. Both offered new perspectives on the experiences of the living things we share our planet with, and fascinating facts, like how trees can recognize which insects are eating their leaves by the chemical makeup of their saliva, and will release pheromones of an insect that preys on their attacker in order to protect themselves. Some criticisms I’ve heard of these books concern over-anthropomorphizing and a lack of scientific clarity in the coverage of some of the referenced studies. The foreword of The Inner Life of Animals actually described Wohlleben’s references to published articles in a way that made me think I wouldn’t enjoy the book. However, I found the studies and anecdotes presented in both books captivating. It does feel strange to anthropomorphize the plants and animals at times, but I think this is a necessary device by which Wohlleben offers perspective. It makes the reader think about how we feel, think, and interact with the world in similar ways, and ask ourselves where the boundaries of those similarities are. Each plant and animal has its own way of perceiving the world, which may be very different from our own.

How we see the differences is partly based on our perception. I didn’t know before reading The Hidden Life of Trees that trees have such a complex network of roots and fungi that allows them to communicate about where resources are and whether predators are coming, and that if other trees need help, they will send each other resources through their root systems. Another factor in how we see the differences stems from the optimization that comes with capitalist motivations. Among the many daily injustices done to animals throughout the world, Wohlleben points out, for example, that many people don’t know how smart pigs are and how they seem to feel many of the feelings we do. If people did know, corporations would not be able to keep pigs in poor conditions and castrate them without anesthetic, which is considered too expensive to give them. Among the animals I had underestimated were ravens, who form lifelong relationships with friends and family; each raven has a distinct name that others will remember for years, even if they don’t see each other. Facts like these make the reader wonder what else we don’t yet understand. There is still much to learn about the plants and animals around us.

Allie Lahnala (2nd year PhD student) recommends:

From my 2020 reads, I recommend “How to Do Nothing: Resisting the Attention Economy” (2019) by Jenny Odell, “Geek Heresy: Rescuing Social Change from the Cult of Technology” (2015) by Kentaro Toyama (a professor in the University of Michigan’s School of Information), and “Race After Technology: Abolitionist Tools for the New Jim Code” (2019) by Ruha Benjamin.

While the books and their authors’ voices are quite different, the core themes connect and enhance each other in thought-provoking ways. Each book in some way attempts to combat the framing that productivity means creating something new (e.g., novel technology meant to correct social injustices), and offers ways to consider actions that are reparative, regenerative, and nurturing to be productive instead (e.g., social support systems and policy changes).

My year of books began with “How to Do Nothing,” which elicited both existential amusement and a healthy form of existential crisis. Jenny Odell proposes a seemingly paradoxical practice of doing nothing as an active process, one that challenges our notions of productivity and emphasizes stopping to seek out “the effects of racial, environmental, and economic injustice” to elicit real change (pg. 22). My favorite chapter, “The Anatomy of Refusal”, relates tales of quirky philosophers and performance artists whose minor refusals of societal norms invoked disruptive perplexity from others; it also recounts histories of social activism during the civil rights and labor movements that required collective and sustained concentration. Through these stories, the author paints a picture of what it means to “pay attention” at an individual and collective level in such a way that allows for the mobilization of movements (pg. 81). What resonated with me most about this chapter were the histories of activism powered by university students and faculty, such as the 1960 Greensboro sit-in in which participating students were “under the care of black colleges, not at the mercy of white employers” (pg. 83), which demonstrate the positions that academic institutions can afford to be in when it comes to taking activist risks.

Similarly, in “Geek Heresy,” Kentaro Toyama discusses mentorship as a more productive form of activism than novel, packaged technical interventions, which are often ill-suited to the problems they intend to address but are nonetheless idealized by technologists. Mentorship, he argues, is often neglected by policymakers and donor organizations, though it “works well as an overarching framework that avoids the problems of top-down authority, benevolent paternalism, or pretended equality (pg. 240).” The core themes of “Geek Heresy” emphasize nurturing people and social institutions as a means toward fighting inequality rather than developing technical fixes. He argues that fixes such as the development of low-cost versions of expensive technologies and their subsequent distribution to impoverished communities only address a symptom of the real issue, and can actually amplify the problem they are intended to solve. Such an instance demonstrates the “Law of Amplification,” which Toyama describes in an article in The Atlantic as technology’s primary effect of amplifying human forces. By this effect, “technology – even when it’s equally distributed – isn’t a bridge, but a jack. It widens existing disparities (pg. 49).”

Aligned with the “Law of Amplification,” Ruha Benjamin argues in “Race After Technology” that “tech fixes often hide, speed up, and even deepen discrimination, while appearing to be neutral or benevolent when compared to the racism of a previous era (pg. 7).” Benjamin’s propositions reflect concrete racial manifestations of such problematic attempts to intervene in social injustices with new technology solutions. She introduces the theory of the New Jim Code as “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era (pg. 5).” Benjamin discusses numerous instances of racial fixes that, in ignoring underlying issues, either unproductively miss the point or actually turn malignant. Her solutions involve an abolitionist toolkit of researched strategies for resisting the New Jim Code, which scrutinize how technology is developed and deployed and how we interpret data (pg. 192). Odell’s aforementioned active practice of “doing nothing” and Toyama’s advocacy against packaged technical interventions that lack long-term social commitments to the marginalized communities for which they are intended would both nurture the concept of an abolitionist toolkit.

Read these books for further insight into and context for these themes. You will encounter actionable suggestions from each author: attention exercises as ubiquitously available as bird watching, discussed by Odell; social activism through mentorship, as proposed by Toyama; and support for specific researchers and organizations that create tools for the abolitionist toolkit outlined by Benjamin. Also, check out the recent study Critical Race Theory for HCI, with Kentaro Toyama as senior author and his Ph.D. student Ihudiya Ogbonnaya-Ogburu as first author, along with co-authors Angela D. R. Smith and Alexandra To. They received a Best Paper award at the 2020 CHI Conference on Human Factors in Computing Systems for this work.

Oana Ignat (3rd year PhD student) recommends:

“Invisible Women: Data Bias in a World Designed for Men” by Caroline Criado Perez

This book is full of research facts on the gender data gap and its effects on women’s lives. Thanks to this book, I have learned how to recognize unconscious bias and how widespread it is in our society. Gender bias concerns not only the pay gap; it is also present in some unexpected areas like snow plowing, designing car safety tests, recognizing symptoms of a heart attack, or prescribing the correct medication.
Women represent more than 50% of the world’s population, and yet they are invisible to a great range of products and services. This is not due to bad intent, but rather to ignorance: when the most important decisions are made by men, other perspectives and opinions are not taken into account, which has led to a world designed by men for men. This is a classic example of how not having diverse teams reinforces inequality. The solution proposed by the author is to rethink the way we design things: collect more data, study that data, and ask women what they need. “Invisible Women” should be read by everyone, especially those interested in creating policies.

“American Like Me: Reflections on Life Between Cultures” by America Ferrera

“American Like Me” is a collection of very diverse stories centered around the lives of immigrant families in the US. As an international student myself, I can empathize with the feeling of being trapped between cultures and, occasionally, not knowing where I fit. Reading these stories left me feeling more empathetic and inspired by learning so much about other cultures and customs. I was really impressed by the variety of authors, who are actors, singers, activists, politicians, and more (see the book cover), and who come from a variety of cultural backgrounds. The differences in the structure of the essays and in the stories presented reinforce the message about the value of diversity and, by extension, of immigration.

Other books that I would recommend, especially for students who want to improve their productivity and well-being: 
“Why We Sleep: Unlocking the Power of Sleep and Dreams” by Matthew Walker 
“Spark: The Revolutionary New Science of Exercise and the Brain” by John J. Ratey

David Fouhey was interviewed by Ralph Anzarouth on CVPR Daily. Permission to republish was exceptionally granted by RSIP Vision. June 2020

[Photo: David Fouhey]

David Fouhey is an Assistant Professor at the University of Michigan in the Computer Science and Engineering department.

RA: David, it is a pleasure to speak to you again. The last time you were featured in our magazine was in one of Angjoo Kanazawa’s photographs of the ‘Berkeley Crowd’. A lot has changed since then. What is your life like now?

DF: It’s a lot busier. I have many wonderful students now. For two of them, it’s their first CVPR, so I’m looking forward to hanging out at the posters with them virtually and answering all the questions. I’m also looking forward to sitting up and drinking coffee with one of my students. It’s important that we ensure everyone can come to the posters and see stuff and we can talk to everybody. Life has certainly changed a lot since the last time I was featured, and it’s changed even more since the first time, which was four years ago!

RA: How many people are there in the lab now?

DF: There are currently eight graduate students and then I have a large number of undergraduates for the summer, which is exciting. Some of them are working remotely. There’s a lot of stuff going on, but it’s really wonderful to work with students.

RA: Did you always want to continue in academia and keep teaching?

DF: Yes, I have really enjoyed teaching, both in the classroom and getting students excited about computer vision and machine learning. I think it’s important not to hoard knowledge in your head. You have to get it out there. It’s really important for people to learn as much as possible and to teach people and welcome them into the field. Machine learning is very exciting now but there are lots of ways that it can go wrong. As people who have been around a while and seen that, I think it’s important for us to teach the next generation. We don’t want to keep on making the same mistakes.

RA: In what way is being an assistant professor different from what you expected?

DF: I have to do many more things than I realized! Lots of very different things. The topic can totally change from one meeting to the next. From talking about the next iteration of a course, to speaking about someone’s results. I switch around a lot, which is exciting, because I see lots of new fun stuff.

[Photo: lab group]

RA: Do you find that you learn things from your students?

DF: What’s great is that students often have new and fresh ideas. What’s wonderful about computer vision is that we really, as a field, don’t know what’s going on most of the time. It’s very easy for someone to get started and to think of something totally new that you’ve never thought of before in that way. That’s why it’s wonderful to work with a collection of students from all sorts of different backgrounds. It keeps you on your toes and you get to learn all these new perspectives on things. It’s great. And they also help you keep up with reading arXiv!

RA: Do you ever feel overwhelmed by it all?

DF: Do you have moments where you think, “Get me out of here!” and want to be a software engineer in a start-up instead? I mean, definitely, in academia like in grad school you often do have these moments where nothing works and where your paper gets rejected, then your paper gets rejected again, and it’s really hard at times. Especially when you first start out. You go into this field where the default response is often no. I think it’s very important as a field that, especially as we’re growing, we treat people with respect and actively try to be inclusive of new people. It’s hard enough for my students when their papers get rejected, but at least they have someone who can say, “I’ll fix this,” but when people are just getting started and don’t have mentors floating around in their life, it can be tough. This is a problem that exists when a field grows really quickly, but in the long run the growth is really exciting.

RA: Now that you see the world through the eyes of a teacher, are there things that you see that really aren’t working, and you think we should fix to make the community work better? Funnily enough, last year, I interviewed Andrew Fitzgibbon from Microsoft and I asked him a similar question, and he told me: “Someday we’re going to have to figure out how to do these conferences without everybody travelling to the same place.” Last year, it sounded impossible, but what a difference a year makes!

[Photo: David Fouhey in Paris]

DF: I really appreciate you asking this question. One of the things that has really changed since I started in computer vision is back then you looked for the example where your system worked and you were really excited, but it was a total fantasy. Like, “Maybe one day my system will work.” Now, we have systems that do stuff. One thing I try to teach, and I want to teach better, is that if you deploy these systems in the real world, if you’re not careful, they can have real consequences. There are all these stories that float in the community from ages ago about data bias. Like an entertaining story about a tank classifier that gets 100 per cent accuracy because it determines whether it’s taken at night or during the day with pictures of Soviet tanks at night and US tanks in the day. But now there are real serious issues where people deploy things. There’s this great paper from Joy Buolamwini and Timnit Gebru on Gender Shades and it has had real downstream impacts. It’s something that as a community we have to start thinking about because we know how a lot of these systems work and we need to make sure they’re not misused. We need to make sure that there aren’t bad outcomes and consequences. There’s an excitement about stuff working, but then this stuff can have really serious impacts and it’s important that as a community we talk about algorithmic bias and address it.

RA: Do you think the community will hear your call? How do you see things changing in this area?

DF: There are many other systemic issues and there’s a lot of reading that everyone can do. A lot of the issues that you spot in these articles are things that you’ll talk about, but even for simpler things you might say, “If I trained a classifier to detect giraffes, maybe it will only pick up on some other correlation.” I think it’s something where we talk about these things as academic examples, and it’s kind of interesting when it happens on MS COCO, but when it happens in the real world, we abstract away the concept that data and algorithms can have bias and forget about it. I think these are really hard problems and we have to find solutions. I don’t have solutions, but I think we have to talk about it and be aware of it and listen to people who have been talking about it for quite some time.

RA: Thinking back to the Berkeley Crowd, what do you miss the most from that time and those people?

DF: I miss ditching work and going off for a hike with my lab mates and taking long extended meals where you discuss anything and everything. Those are times that you should treasure in graduate school because you don’t get as many of them after.

RA: I think every one of our readers can relate to that.

[Photo: the Berkeley Crowd]

DF: One thing that I love about this community is that you see the same people and you’ve known them over many years. I met Angjoo at ECCV 2012. I was not part of the Berkeley Crowd for a while, but I knew them, I would see them at conferences, we’d hang out, we’d talk, we’d catch up. Now, they’re friends for life, and I’m sure in 20 or 30 years from now we’re still going to be in contact. You make these amazing friends over this really long period of time. It’s great. When you start going to CVPR, you don’t expect it. Then you go again and again and again.

RA: That is a really nice message for people attending their first CVPR. Everyone can build their own Berkeley Crowd.

DF: Yes, they’re friends you don’t realize you have yet.

RA: Do you have a funny story from those days that you could share with our readers?

[Photo: hiking]

DF: I remember when Alyosha Efros would take us on a hike, he’d say, “It’ll be an hour,” and it’d always be like four hours! We would do things like there was a miniature train that he would take us on, and somehow, we’d always end up eating gelato. He had this uncanny ability to find gelato! These hikes would be outrageously long, and his estimates would be wildly inaccurate, but they were so much fun. I’d come home totally sunburnt but very happy! My message to people is make sure you take the time to do stuff like this because it’s really important.

RA: By having a career in academia, is that your way of not abandoning that world completely?

DF: Yes, I get to talk to people about all sorts of research problems all the time. I can work on all sorts of things. I’m in heaven! I’m trying to do lots of different projects at the same time and it’s so much fun getting to have that experience with my students. An advisor-advisee relationship is not the same as you and your office mate, but there are similarities. You sit in the office and say, “What problems should we be solving?” Or, “Did you see this new thing on YouTube? How can we use that for computer vision?” It’s wonderful.

RA: Computer vision technology is evolving so fast. Where do you see things going next?

DF: People are really interested in 3D now, which is great. I got interested in 3D when it really didn’t work. Some of my old results are just horribly embarrassingly bad! It’s exciting. Because of deep nets now there’s stuff that you just couldn’t imagine. Justin Johnson is also at Michigan and he’s interested in 3D, so we have two students who we co-advise and it’s a lot of fun.

by Yiwei Yang 

This article contains a description of an AI project which was awarded the “Best Poster Award” by the public at the University of Michigan AI Symposium 2019.

Machine learning techniques, especially deep learning, have been widely applied to solve a variety of problems, ranging from classifying email spam to detecting toxic language. However, deep learning often requires a massive amount of labeled training data, which is very costly and sometimes infeasible to obtain. In low-resource settings (e.g., when labeled data is scarce, or when the training data only represents a subclass of the testing data), machine learning models tend not to generalize well. For example, reviewing legal contracts is a tedious task for lawyers. To help facilitate the process, machine learning methods can be used to extract documents relevant to certain clauses. However, the company (e.g., IBM) that produces the model can only obtain a large number of its own contracts, whereas contracts from other companies (e.g., Google, Apple) are hard to obtain, causing the trained model to overfit and generalize poorly to contracts pertinent to other companies.

On the other hand, human experts are able to extrapolate rules from data. Using their domain knowledge, they can create rules that generalize to real-world data. For example, while an ML model may notice that sentences in the past tense are correlated with the communication clause, and thus use past tense as a core feature for classification, a human would easily recognize that the verbs in the sentences are the true reason why the sentences should be classified this way, thereby creating a rule such as “if the sentence has verb X, then the sentence is related to communication.” However, coming up with rules is very difficult: human experts often need to manually explore massive datasets, which can take months.

Our goal is to combine human and machine intelligence to create models that generalize to real-world data, even when training data is lacking. The core idea is to first apply an existing deep learning technique to learn first-order-logic rules, and then leverage domain experts to select a trusted set of rules that generalize. Applied this way, the rules serve as an intermediate layer that bridges the explainability gap between humans and the neural network. Such a human-machine collaboration makes use of the machine’s ability to mine potentially interesting patterns from large-scale datasets, and the human’s ability to recognize patterns that generalize. We present the learned rules in HEIDL (Human-in-the-loop linguistic Expressions with Deep Learning), a system that facilitates the exploration of rules and the integration of domain knowledge.

Figure 1: Overview of our human-machine collaboration approach

The learned rules and the features of HEIDL are illustrated below.
What do the rules look like?

Each rule is a conjunction of predicates. Each predicate is a shallow semantic representation of each sentence in the training data, generated by NLP techniques such as semantic role labeling and syntactic parsing. It captures “who is doing what to whom, when, where, and how” described in a sentence. For example, a predicate can be tense is future, or verb X is in dictionary Y. So a rule can simply be tense is future and verb X is in dictionary Y. Each rule can be viewed as a binary classifier. A sentence is classified as true for a label if it satisfies all predicates of the rule.
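To make this concrete, here is a minimal sketch in Python of a rule acting as a binary classifier over a sentence’s shallow semantic representation. This is not the HEIDL implementation; the predicate names, the toy sentence representation, and the verb dictionary are illustrative assumptions.

```python
# Sketch only: a rule as a conjunction of predicates over a sentence's
# shallow semantic representation. Names and data are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

Sentence = Dict[str, object]              # e.g. {"tense": "future", "verb": "notify"}
Predicate = Callable[[Sentence], bool]    # one shallow semantic check

@dataclass
class Rule:
    """A conjunction of predicates: fires only if every predicate holds."""
    predicates: List[Predicate]

    def classify(self, sentence: Sentence) -> bool:
        return all(p(sentence) for p in self.predicates)

def tense_is_future(s: Sentence) -> bool:
    return s.get("tense") == "future"

def verb_in_dictionary(dictionary: set) -> Predicate:
    return lambda s: s.get("verb") in dictionary

communication_verbs = {"notify", "inform", "advise"}   # toy dictionary
rule = Rule([tense_is_future, verb_in_dictionary(communication_verbs)])

print(rule.classify({"tense": "future", "verb": "notify"}))  # True
print(rule.classify({"tense": "past", "verb": "notify"}))    # False
```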

What are the features of HEIDL?
– HEIDL allows expert users to rank and filter rules by precision, recall, F1, and predicates
– After evaluating a rule, users can approve or disapprove it (the final goal is to approve a set of rules that aligns with the users’ domain knowledge; a sentence is then true for a label if it satisfies any rule in the set; a minimal sketch of this set-level evaluation appears below)
– The combined performance (precision, recall, F1 score) of all approved rules is updated each time a rule gets approved, helping users keep track of overall progress
– Users can see the effect of a rule on overall performance by hovering over it
– Users can modify rules by adding or dropping predicates, and examine the effects
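Under the same assumptions as the sketch above (the helper names here are hypothetical, not HEIDL’s actual API), the approved rule set behaves as a disjunction: a sentence is positive for a label if any approved rule fires, and the combined precision, recall, and F1 can be recomputed whenever a rule is approved or disapproved.

```python
# Sketch only: score the currently approved rule set on labeled sentences.
from typing import Callable, Dict, List, Tuple

Sentence = Dict[str, object]
RuleFn = Callable[[Sentence], bool]   # e.g. an approved Rule's classify method

def rule_set_predict(rules: List[RuleFn], sentence: Sentence) -> bool:
    # Disjunction: positive for the label if ANY approved rule fires.
    return any(rule(sentence) for rule in rules)

def rule_set_scores(rules: List[RuleFn],
                    labeled: List[Tuple[Sentence, bool]]) -> Tuple[float, float, float]:
    # Combined precision / recall / F1 of the approved rule set.
    tp = fp = fn = 0
    for sentence, gold in labeled:
        pred = rule_set_predict(rules, sentence)
        if pred and gold:
            tp += 1
        elif pred:
            fp += 1
        elif gold:
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```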

Figure 2: HEIDL: the user interface that allows experts to quickly explore, evaluate, and select rules

We evaluated the effectiveness of the hybrid approach on the task of classifying legal documents into various clauses (e.g., communication, termination). We recruited 4 NLP engineers as domain experts. The training data consists of sentences extracted from IBM procurement contracts, and the testing data consists of sentences extracted from non-IBM procurement contracts. We compared this approach to a state-of-the-art machine learning model – a bi-directional LSTM trained on top of GloVe embeddings – and demonstrated that the co-created rule-based model in HEIDL outperformed the bi-LSTM model.
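For readers unfamiliar with the baseline, the sketch below shows the general shape of a bi-directional LSTM sentence classifier in PyTorch. It is an assumed, simplified configuration (randomly initialized embeddings standing in for GloVe, arbitrary dimensions), not the exact model used in the paper.

```python
# Sketch of a bi-LSTM clause classifier; dimensions and embeddings are assumptions.
import torch
import torch.nn as nn

class BiLSTMClauseClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 100,
                 hidden_dim: int = 128, num_labels: int = 2):
        super().__init__()
        # The paper uses GloVe embeddings; random embeddings are used here for brevity.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer token indices
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.encoder(embedded)
        # Concatenate the final forward and backward hidden states.
        sentence_repr = torch.cat([hidden[-2], hidden[-1]], dim=-1)
        return self.classifier(sentence_repr)

model = BiLSTMClauseClassifier(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 20)))   # two toy sentences of 20 tokens
print(logits.shape)                               # torch.Size([2, 2])
```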

Our work suggests that instilling human knowledge into machine learning models can help improve overall performance. Further, it exemplifies how humans and machines can collaborate to augment each other and solve problems that neither could solve alone.

The full paper can be found here: https://arxiv.org/abs/1907.11184

The author of the paper is a student researcher at the University of Michigan. This work was done in collaboration with IBM Research as a summer internship project.




by Michigan AI’s Prof. Benjamin Kuipers & Prof. Rada Mihalcea


Prof. Benjamin Kuipers recommends:

I am currently reading a sequence of three books by Michael Tomasello that I think say something important about the contribution of different kinds of cooperation and collaboration to human success, individually and as a species. The books are:
A Natural History of Human Thinking (2014)
A Natural History of Human Morality (2016)
Becoming Human: A Theory of Ontogeny (2019)

All of these focus on the evolution of cooperation, which he argues is responsible for the dominance of the human species on our planet. He uses experimental evidence about the cognitive capabilities of great apes as a proxy for the capabilities of the last common ancestor shared by humans and great apes, about six million years ago. He observes that great apes are capable of sophisticated knowledge about physical causality including tool use, and even knowledge of intentionality: that is, the beliefs, goals, and plans that they and other individuals may have in a given situation. However, this knowledge is “individual intentionality”, used for individual competitive advantage in pursuing the agent’s own goals.

He argues that about 400,000 years ago, early humans began to evolve the capabilities for “joint intentionality”, the ability to pursue shared goals with another agent. This has a number of implications, including the development of the ability to infer how the partner sees the world, the ability to communicate information that the partner needs, and the need for the partner to trust and believe what the agent is attempting to communicate.

With the emergence of modern humans about 150,000 years ago, this progresses to “collective intentionality”, involving shared goals with a larger population of other agents, leading to the development of a shared culture of beliefs, goals, and norms. As this culture develops, individuals acquire its structure, not as learned knowledge about the beliefs of other individual agents, but learned from infancy and childhood as “the way things are”. Norms progress from ways to maintain a collaboration with another specific individual to “how things should be done” to participate successfully in the society.

This evolutionary picture has implications for the nature of human thinking, human morality, and human child development (“ontogeny”).

These books by Michael Tomasello fill in some important gaps in my understanding of how ethics contributes to the survival and thriving of human society, discussed in:
Non-Zero: The Logic of Human Destiny, by Robert Wright (2000)
The Better Angels of Our Nature: Why Violence Has Declined, by Steven Pinker (2011)
Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, by Steven Pinker (2018)



Prof. Rada Mihalcea recommends:


If I were to pick only three books from among the science books I read over the past year, I would choose the ones that I quoted the most in my conversations with others.
The books are:

Life 3.0 – Being Human in the Age of Artificial Intelligence, by Max Tegmark (2017)
As I was reading this book, I first liked it, then disliked it, then liked it again, and ended up loving it. I am not even sure if I love the book itself or the thoughts that it provoked, but it probably doesn’t matter. The book is very dense with interesting aspects of life in the presence of an advanced AI. The ideas that I found the most intriguing: (i) It’s not a matter of if, but a matter of when, our planet will end; technology is our only hope to transcend our own condition. (ii) The right question to ask is not “how will the future of AI look” but “how would we like the future of AI to look.” (iii) Consciousness (defined as the sensing of experiences) can happen with small entities (e.g., humans), but it’s much harder with large entities (e.g., the universe).
Teaser: The book also includes an AI-based scheme to get rich using Amazon Mechanical Turk (don’t put it into practice!)

Factfulness – Ten Reasons We’re Wrong About the World – and Why Things Are Better Than You Think, by Hans Rosling, Ola Rosling, and Anna Rosling Rönnlund (2018)
Let me start by admitting that I failed miserably at the survey at the beginning of this book, which asks questions about the state of the world: “What is the life expectancy of the world today?” or “How many people in the world have some access to electricity?” As it turns out, most people fail this survey: according to the Roslings, our view of the world is largely outdated, as the statistics that we generally use as reference correspond to the state of the world in 1960. And yes, almost 60 years have passed since then! The book is a wonderful account of the current state of the world – as seen by a physician, a statistician, and a designer – and also a positive message that the world is in a much better state than we often think it is.
Bonus:
Anna Rosling’s Ted Talk
Dollar Street: an interactive website to see how people really live 

What If?: Serious Scientific Answers to Absurd Hypothetical Questions, by Randall Munroe (2014)
“Delightful” is probably the right word for this book; or rather, “scientifically delightful.” This book is a collection (that you can read in any order!) of questions asked on the popular XKCD website, along with solid scientific answers. The absurdity of many of the questions makes the book amusing – e.g., “How much computing power could we achieve if the entire world population started doing calculations?” (Just imagine the whole world stopping what they are doing to do calculations instead! It turns out that even back in 1994, a desktop computer exceeded the combined computing power of humanity), or “When, if ever, will the bandwidth of the Internet surpass that of FedEx?” (Believe it or not, FedEx’s throughput is currently a hundred times that of the Internet). At the same time, the clearly explained science in the answers makes the book a rich learning resource covering a wide variety of disciplines – biology, geology, computing, and more.

Bonus for Ann Arbor locals: Randall Munroe will be in town on September 6, hosted by Literati

Want even more recommendations? The other two contenders for my top science books over the past year were:
The Immortal Life of Henrietta Lacks, by Rebecca Skloot (2010)
Why We Sleep: The New Science of Sleep and Dreams, by Matthew Walker (2017)