First of all, fuck.

I have not been able to write much these past few weeks for two main reasons: 1) school is hard; and 2) I haven’t had much to say about my projects. However, I came back from Thanksgiving break having forgotten that I had a Design Meeting on Wednesday. I couldn’t remember what I had to prepare for the class, so I emailed Malte Jung (my instructor and robot expert) to ask if we were supposed to prepare anything. He said that we were meant to present three to five working prototypes for the class to review.

Well, first of all, fuck. Per usual, I missed a very important detail from class. AND I only have one idea.

But okay, breathe deep and dream and think.


My advisor really liked this idea of using crowd-sourced knowledge to support communities of GBV survivors. And, when you think about that as a starting point, there are many ways to design tools that can help.

Since the last post, I’ve been exploring limitations of and tweaks to my original idea: a Google Chrome extension that enables people to highlight problematic phrases/language that perpetuates gender-based violence in articles. I’ve had really great conversations about it with friends and colleagues. One of the biggest issues I’ve heard, and definitely agree with, is that language can be difficult to parse. For example, let’s say that you see Phrase A and highlight it as problematic. That might not be what Person B sees – they might think that the whole sentence is problematic. I guess this is an issue of how people read through a statement or article and try to evaluate it without any criteria or focused questions. So perhaps there is a different way to approach this. One thing I would like to try is to observe people doing this as an exercise, without any specific instruction. Perhaps that is a good way to get a handle on the issue.

Main things to think about: language when it comes to GBV.

One of the new ideas I had (while doing the dishes) is an apology discernment website – what if there was a website that could parse apology statements and show people whether an apology is sufficient? People’s responses to recent apologies have been…rather troubling. For example, after the Louis C.K. apology statement was released, there were male journalists on my Twitter TL legit saying “WOW, WHAT AN APOLOGY”.

But if I may “well actually” here: it wasn’t really an apology, especially if you look at the statement within the context of originating from a sexual predator. So, what if there was a website set up kind of like an informative quiz that asked you certain questions about an apology statement, to help you discern whether it was actually an apology before you Tweet/make statements about it? First, I would have to design the quiz. I think it would show one question on the screen at a time, and you would have to click an answer to go to the next question. But what would the evaluation look like? Are we talking about a score here? Or are we talking about categories, where, for example, we can say that the accused does not display a workable understanding of consent in their statement, and provide an explanation of why that’s important? There are so many ways to conceptualize this website – I wonder if this is my most do-able idea.
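If I went the categories route, the evaluation could look something like this rough Python sketch. To be clear: the questions, categories, and pass/fail logic here are all hypothetical placeholders, not a finished rubric.

```python
# Hypothetical sketch: evaluate an apology-discernment quiz by category
# instead of a single score. Questions and categories are placeholders.

QUESTIONS = [
    {"text": "Does the statement name the harm done?",
     "category": "acknowledgment"},
    {"text": "Does the statement avoid excuses or deflection?",
     "category": "accountability"},
    {"text": "Does the statement show a workable understanding of consent?",
     "category": "consent"},
    {"text": "Does the statement commit to concrete change?",
     "category": "repair"},
]

def evaluate(answers):
    """answers: one boolean per question (True = yes).
    Returns per-category pass/fail plus the failed categories,
    so the site can explain *why* each failed category matters."""
    results = {}
    for question, answer in zip(QUESTIONS, answers):
        category = question["category"]
        # a category passes only if every question in it passes
        results[category] = results.get(category, True) and answer
    failed = [c for c, ok in results.items() if not ok]
    return results, failed

results, failed = evaluate([True, False, False, True])
print(failed)  # ['accountability', 'consent']
```

The nice part of categories over a score is that each failure maps to an explanation you can show the user, instead of a bare number.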

*When I say doable, I mean simply that I can accomplish making it – not necessarily that the product will have impacts that I want.

If I could, I would make it more like a game. However, I need to think about who I am making this for, again. If I’m making it to support communities of sexual trauma victims, then making it a game with humorous aspects probably isn’t the best idea. I use humor to cope a lot, but I have to recognize that it is not the same for others. However, if I thought about this as an education tool…

Main things to think about: what does a good apology look like when it comes to trauma?

My last idea is not even mine. It came from a dinner with housemates, where I brought up my project and we went into an impromptu discussion about it. One thing someone brought up that I haven’t gotten my mind off of was – what if there was a crowdsourced flag system for social media? Like, in your Twitter timeline, there’s a tweet that defends accused sexual predators and you don’t want to see it. You could enable a plug-in that alters your feed and, as my housemate said, puts a colorful Post-It on top. So, let’s say Person A tweets something that is very triggering for sexual violence survivors – Person B can flag that tweet to have a Post-It put on top of it. (This is also another interesting discussion of what constitutes flag-worthy content. Twitter is notorious for refusing to censor abuse or harassment. More thoughts on this later.)

Altering tweets is …weird, and I’m not sure how to interact with the platform to do that. There is this plug-in, Make Trump Eight Again, that visually changes Trump’s tweets so they look like they were written by a child in crayon.

[Screenshot: a Trump tweet rendered in crayon by the Make Trump Eight Again plug-in]

I’m looking for some code on GitHub or something, so I can see how they did that. Not seeing anything yet. As much as I like to stay away from Trump’s Twitter, I’m going to download the plug-in and see how it works.

There’s also this concept of filtering (or censoring in this case) social media feeds. Ethan Zuckerman recently announced a new tool called Gobo that can help filter social media. And, what I think is most interesting is that it brings the idea of control back into frame.

Main things to think about: Idk if I can actually make this.

All in all, I have to build three prototypes overnight so wish me luck slash pray for me, thanks.


Design Project: “I believe you”

One of the biggest struggles I’ve had with this blog (in the whole month I’ve had it) is deciding what to write about. This PhD program is exactly what I wanted – everyone in my program is doing such interesting and innovative research. BUT now the problem is that I’m interested in…everything.

So, I had no idea what I was going to write in this weekly post. Here were some of the things I was considering:

  1. Agriculture and technology – how does this relate to land rights? Climate change? Economic rights?
  2. Machine learning technology and dating – I’ve been speaking about this nonstop. I’m trying to develop a research proposal with Fernando Delgado, another first-year in my program who is also advised by Karen Levy, about how these technologies can possibly exacerbate inequalities.
  3. Cybersecurity equality – I was lucky enough to hear Prof. Fred Schneider of the Cornell CompSci Department speak about his joint research with Prof. Deirdre Mulligan (School of Information at Berkeley) on cybersecurity. They detailed a series of doctrines that call for cybersecurity to be seen as a public good and implemented as such. So, I’ve been thinking a lot about what that means, since public goods are rarely ever implemented and distributed equally.
  4. My current projects at school – the best part of a PhD program so far is that they teach you methods and principles, and then set you free to apply them to whatever you want. So, I have two big projects: one due by the end of this semester, and the other due at the end of next year. I’m going to talk about these in this post.


For my Quantitative Research Methods course, I basically have to apply Python to any research question I have. Which sounds basic but is SO HARD.

What I want to explore is inattention and Somalia. When last month’s attack on Mogadishu occurred, I was in a complete panic. Somalia is struggling and clawing its way out of decades of civil war and destruction, with little global help besides Turkey’s efforts. And, when a terrorist attack that kills over 300 people occurs and the international community gives barely a peep, it is beyond heart-breaking.

The people killed were people who were beginning to see their city, their country, their lives changing. And, to see so many people’s lives wiped out in an instant, without acknowledgement, without care, without attention – it was just horrifying.

When I speak about attention, I’m talking about systems. What factors made this attack barely acknowledged by global and Western media? Some are easy to pick out; here are a few:

  • Black
  • Muslim
  • Poor

Kk cool. But I believe there are more factors that come in. As I was discussing with my professor, Paul Ginsparg (who is a MacArthur Genius and created arXiv – so lit), I want to look at how Somalia is framed by media sources by doing a word-pair correlation analysis (PLEASE NOTE THAT I’M STILL FIGURING PYTHON OUT). So, how often does the New York Times frame Somalia in terms of terrorism, failed statehood, etc.? Does this differ between news sources? How much attention does each news source give per attack in Somalia?

So, in order to do this, I have to scrape several news sources and create a list of words to examine. More to come.
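The counting step could look something like this rough Python sketch (REMINDER: still figuring Python out). The framing words and article texts are made-up stand-ins; a real version would scrape the articles from each news source first.

```python
import re
from collections import Counter

# Rough sketch of the word-pair idea: for each article that mentions
# Somalia, count which framing words co-occur with it. The framing
# words and article texts below are made-up stand-ins.

FRAMING_WORDS = {"terrorism", "famine", "failed", "pirates", "attack"}

def frame_counts(articles):
    counts = Counter()
    for text in articles:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if "somalia" in words:                    # article mentions Somalia
            counts.update(words & FRAMING_WORDS)  # which frames appear too?
    return counts

articles = [
    "Somalia reels from terrorism after the attack on Mogadishu.",
    "Somalia battles famine as the rains stay away.",
    "A profile of Berlin's tech scene.",  # no Somalia mention: ignored
]
counts = frame_counts(articles)
print(counts["terrorism"], counts["famine"], counts["pirates"])  # 1 1 0
```

Running this per news source would let me compare, say, how often the NYT versus another outlet frames Somalia with “terrorism”.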

Design Project – ‘I believe you’

As I mentioned in my post a couple weeks ago, I have to complete a design project requirement for my first year. I’m hoping to design a plug-in that will crowd-source highlighted statements that can notify the reader if an article/essay/etc reports/describes gender-based violence in a problematic way.

I’ve gotten great feedback on my original entry and friends and colleagues have even forwarded these posts to other people – and I’ve gotten great advice and feedback from them! Such a blog win.

With this news storm of sexual harassment and assault in Hollywood (and today’s big drop on Roy Moore), it feels like validation of survivors is more important than ever.

I wanted to explore this idea of believing survivors (I need to find more literature on this next part, in case anyone has any suggestions). Because we live in such a rape culture, and such a misogynistic one, to feel validated as a survivor is a moment-by-moment fight. I am in no way exaggerating. To feel like it is not your fault that you were attacked, humiliated, or derided for what happened to you is a constant confrontation with the world around you. The conversation about men who are predators is really heating up, and we are only just beginning to examine the women who enable them.

What I am trying to say is that it is a societal problem. We are so conditioned to believe that victims, who are often people without power, did something to deserve what happened to them – and we are so conditioned to believe that predators, who are often men in power, did not commit the crime. And, I also want to point out that, even if people believe that a crime has been committed, gender-based violence and sexual violence are so taboo that a lot of people don’t want to confront the reality of how horrific and prevalent these crimes are.

Which is why I want to focus my design project on validating survivors. It is a day-to-day, moment-to-moment fight to believe that what happened to you was wrong. You have to confront reporting, debates, and discussions on crimes that are exactly what happened to you, and you have to hear people argue about whether or not it should be taken seriously. You have to confront people who perhaps only 40% believe you. Or you have to deal with friends and family who are unwilling to face the truth of what happened to you and just plain don’t want to talk about it. And these debates, these struggles, these arguments are mirrored in media, in entertainment, at the dinner table.

This plug-in (while very likely not created with the best of coding knowledge) is made with these survivors in mind. If you read an article that notes that the victim so happened to be wearing WHATEVER, you can activate the plug-in and see that 40 other people noted that that statement was fucked up.

It’s not meant to be a perfect solution. I just have to say that having someone look you in the eyes and say “I believe you. 100%. I believe you” is a very very powerful thing.

Thanks for getting through a week’s worth of blabbering. More details to come on the design project – very likely about the actual design. If you have any questions, suggestions, CODING HELP, you can reach me at idilaali [AT] gmail.

Machine Learning and Intimacy: A Rick & Morty Adventure

I’ve started my “deep dive” into artificial intelligence/machine learning these past few weeks, and one thing that really caught my eye was this article about how machine learning can be used in dating apps.

*This post isn’t meant to be formal academic thought, but simply an exploration into a tech phenomenon*


With machine learning capabilities, dating apps *could* make dating a lot easier – you could weed through the ultra-misogynists, the white supremacists, and people who you just find boring. Think of the time you’ll save! The effort! The unpleasantness avoided! Let the computer do the work, and you have your soulmate, badabing badaboom.

Yet, this is all at the expense of your privacy. As the article describes, these dating apps would get all of your information via your social media activity, which means your Twitter RTs, your Facebook likes, etc are all open to being stored, analyzed, and categorized by algorithms. But here’s the part that is so interesting to me: that these algorithms will have the power to REALLY interpret social media activity. Like, let’s talk memes. Let’s bring in my favorite Rick & Morty meme.

[Screenshot: Rick & Morty meme]

I do have to say – I use any and every reason to bring up this meme.

Well, this is a hilarious meme. I think it’s funny, you think it’s funny. What if we find it funny for different reasons? I think this meme is funny, because Rick, a super genius scientist who needs to be always right (omg Rick is my father) is finally wrong about something!! You think this meme is funny because of crazy astrophysics theorems that I will never understand because I am woman. So, we meet up for burgers, and we absolutely hate each other. I relay feedback into the dating app that I find you pretentious, not fun, and boring. You put in feedback that I’m dumb woman who shouldn’t be watching Rick & Morty because I not physics.

Cool, so what this machine learning algorithm will do is take that feedback and change how it interprets the incoming data. It will learn that there are different ways of interpreting the meme, dependent on several factors. So, the next time it sees that a person likes this meme, it will look at other factors to determine whether that person finds the meme funny in the way you find it funny. Cool. Super interesting and amazing computing.
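To make that feedback loop concrete, here’s a toy Python sketch. Everything in it – the feature names, the scoring rule – is an invented simplification; no real dating app works this simply.

```python
from collections import defaultdict

# Toy version of the feedback loop described above: record why past
# matches over this meme went well or badly, then use those other
# factors to re-score new matches. Feature names are invented.

class MemeMatcher:
    def __init__(self):
        # feature -> [good dates seen, bad dates seen]
        self.history = defaultdict(lambda: [0, 0])

    def feedback(self, shared_features, good_date):
        for feature in shared_features:
            self.history[feature][0 if good_date else 1] += 1

    def score(self, shared_features):
        good = sum(self.history[f][0] for f in shared_features)
        bad = sum(self.history[f][1] for f in shared_features)
        return good - bad  # positive = promising, negative = skip

matcher = MemeMatcher()
# both liked the meme, but for different reasons -> terrible date
matcher.feedback({"likes_rick_and_morty", "physics_humor"}, good_date=False)
# both liked it for the same reason -> great date
matcher.feedback({"likes_rick_and_morty", "character_humor"}, good_date=True)

print(matcher.score({"likes_rick_and_morty", "physics_humor"}))    # -1
print(matcher.score({"likes_rick_and_morty", "character_humor"}))  # 1
```

The point is just that liking the same meme stops being one data point and becomes many, depending on the other factors attached to it.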

Now that we’re past the marvels of this technology, let’s talk about the real problems. In order to be given your soulmate by these future apps, you are handing over so much privacy (and the argument goes, well how much do we even have left??).

Ok, rewind. Why would you, a consumer, use a dating app that utilizes machine learning algorithms? Very likely because it makes dating more efficient: it reduces the time you have to spend on the app by increasing the chance of finding a good option for your romantic life. However, in order to accomplish this, the level of data these apps will collect from you will be astounding. I spoke briefly with my BFF GRACE about this level of data collection, and she mentioned that she doesn’t interact much with social media to begin with. This is a super good point, and one that technology companies are already working around with as many design innovations as possible. There are more passive ways of finding out what you are interested in – for example, Facebook can track your online activity even after you’ve logged out. Your data is precious precious gold to these companies, and they will figure out any way to get it. Think of these companies as Gollum:

[Screenshot: Gollum]

And we are all Precious.

There are ways to get your online activity data that don’t involve actively posting a status or Tweeting. You can Like Facebook posts and Tweets. Your cursor can even just linger on someone’s post, and that’s a data point. There are so many ways your personality profile can be constructed.


Privacy aside, what could an app like this mean for the human experience? Like yes, I think dating is actual trash. Yet, I think it is, in a way, an important experience to meet people and say “I do not like this person, because of these reasons, ___”.

This honestly reminds me of the movie Timer. This film introduces the idea of a technology that, once implanted, counts down to the moment you meet your soulmate. And what this film does is explore the other ways we love that do not fit the idea of ‘soulmates’. For example, an individual in the film finds out that she will not meet her soulmate until she’s 42. In our world, she would likely embark on a series of romantic relationships until she meets that person. However, in the film, she seeks out one-night stand after one-night stand, because the technology has, in a way, invalidated her right to have emotionally intimate relationships before she’s 42. Super interesting movie, but I got sidetracked.

Sometimes, we need to be with the wrong people in order to learn more about ourselves. Yes, this technology has the ability to make attaining love more efficient, yet it also seems like it would make us only be with people who are the most compatible to our values and personality. Instead of an information bubble, are we coming to the age of the….LOVE BUBBLE?!??!

In addition, I think there is a fault in the idea that the personality profile these ML algorithms build is actually who we are. For example, a man who attended the Women’s March and Likes/RTs a bunch of mainstream feminist things may not *actually* be a feminist. There is a factor of social media performance that I think will be very difficult to detect. Because, even as men post their Women’s March visuals, I do think they often believe that they are feminists – but do they actually fully exercise the belief that women deserve the same rights as men? Like, maybe, not interrupting women? Or perhaps being able to handle their shit when a woman says they’re wrong?

I think what I am trying to explain is that, perhaps what we present on social media is a personality and value performance that often does not translate into the real world, because the real world is more difficult than just a post or even a series of social media data points.

Lastly, Fernando Delgado, from my cohort at Cornell, questioned the ability of these algorithms to really predict chemistry. I think this post has gotten too long, so I’ll end with this dope slogan we came up with: You can’t compute chemistry.

Design Project: October 2017

One of the requirements for my first year is a Design Project. This project requires me to employ theories of design and create a prototype/program/something/anything. At first, I wanted to explore the tensions of measuring a philanthropic program holistically and accurately. However, I think I’m moving to a project that focuses on gender-based violence (GBV) and technology.

I met with Prof. Phoebe Sengers last week, who is crazy brilliant and amazing. I spoke with her about my research interests and brought up the Design Project. I mentioned that, while I think accurate measurement is important, domestic violence and gender-based violence really get my blood boiling. So, I asked, how can technology be used to disrupt frameworks of belief? What are ways that you can disrupt an information bubble that is harmful to moving GBV work forward? The actual answers to that are complicated. HOWEVER, there are ways I can look at aspects of this question in the design project.

Prof. Sengers suggested I take a speculative design approach to my design project. As in, churn out thirty ideas with no limitations or restrictions. Just get it on paper. For the brevity of this post, I’ll focus on one of these speculative design ideas.

Let’s say you, a media consumer, are reading a news article about a sexual assault/domestic violence incident/etc. You see a problematic phrase – say, the writer mentions what the victim was wearing or wonders why the victim chose to stay with their abuser. Basically, statements that mirror rape culture. Instead of writing in the comments section, you use a plug-in to highlight the offending phrase.


Ok, now that you’ve highlighted the phrase, what if other people also did the same? What if 40 other people saw the same problem you saw? And what if, when you right-clicked the plug-in, you had the option to see the most-highlighted phrases on an article dealing with allegations against, let’s say, Woody Allen in the NYT.

When you add these highlighted phrases, you change the way the article is seen. This can shift the article’s standing from authoritative to visibly prone to societal biases.
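On the back end, the aggregation piece could be as simple as counting highlights per phrase per article URL. A minimal Python sketch (the class name, URL, and phrases are all made up, and my coding knowledge is a work in progress):

```python
from collections import Counter, defaultdict

# Minimal sketch of the plug-in's server side: store every reader's
# highlights per article URL, then surface the most-highlighted
# phrases. The class name, URL, and phrases are all made up.

class HighlightStore:
    def __init__(self):
        self.by_article = defaultdict(Counter)

    def add(self, url, phrase):
        self.by_article[url][phrase] += 1

    def top_phrases(self, url, n=3):
        return self.by_article[url].most_common(n)

store = HighlightStore()
url = "https://example.com/allegations-article"
for _ in range(40):  # 40 readers flag the same phrase
    store.add(url, "the victim happened to be wearing...")
store.add(url, "chose to stay with her abuser")

print(store.top_phrases(url))
# [('the victim happened to be wearing...', 40),
#  ('chose to stay with her abuser', 1)]
```

The right-click view would then just fetch something like top_phrases for whatever page you’re on.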


An example of what highlighting from the plug-in could look like, from Medium:

"Good Men" and Harvey Weinstein

Highlighted statement from Matt McGorry’s essay on Medium



Today, I met with my advisor, Prof. Karen Levy. We talked about this idea, and she brought up some really great questions. I had mentioned to her that there could be a feedback loop that alerts journalists to how communities are perceiving their articles about GBV. So, for example, let’s say a journalist writes something about workplace harassment and gets a lot of clicks for it. However, this plug-in could show that 100 people had issues with how they wrote about the alleged abuser.

Karen then brought up an important question: who is this plug-in for? Is it to direct journalists on how to report on GBV fairly and without bias? I don’t think I would create this for journalists. I think I am looking for community empowerment. Perhaps what this plug-in can show is: 1) you are not alone in thinking that reporting on GBV is consistently problematic, and 2) you can, in a way, change the way content on GBV is portrayed. I think of this as a community validation tool, as well as a way to visually change power structures.

This is my design project so far. Now, I need to explore issues/assumptions as ALL ACADEMICS DO.


  1. That the audience might not be similarly knowledgeable, so the highlighted statements might not be congruent
  2. That parts of the audience might not have good intentions, i.e. intentionally highlighting nonsensical statements
  3. More to come.


First two months: predoctoral thoughts

**Please note that these are MUSINGS and not actual research findings

I think about numbers a lot. And what they mean. Collected numbers are data. Sets of data that are too large for a human to process are big data. And machine learning is when a computer learns from the results of analyzing datasets, and can automatically improve its analytical model based on its findings. These are a lot of new technologies, and I wonder how they will impact the most vulnerable people. And how they can be improved upon to be as inclusive and fair as possible.

So, what does this actually mean? In my two months as a student, I have learned these lessons:

  1. Computers aren’t people.
  2. Fairness is complicated.

When I say computers aren’t people, I mean that computers cannot employ logic without being programmed to do so, and, if a result is questionable, cannot explain themselves like a human can. However, a computer can be programmed with the same biases a human has. Which means that a machine can arrive at the same prejudiced decision as a human, but if you inquire into how it got to that decision, the machine may not be able to tell you.

So how on earth do you program fairness into machine learning? Welp, there seems to be a whole body of work/inquiry that I have yet to read. Stay tuned?


When I think about where I want to make an impact, I always think about Somalia. Somalia has almost no data recorded in the World Bank, especially when you compare it to other countries. So, one problem with data collection is incorporating inaccurate data at a large scale – but what do you do with a country that has almost no data collected at all?

I’m also pondering the erasure of Somalia. With the recent terrorist attack on Mogadishu, the lack of global attention and care was astonishing. I’m considering a project for my Quantitative Methods course to analyze how we frame Somalia through media. What words are used to frame Somalia? Can this framing develop into bias and inattention in donation and fundraising?

I think I’m realizing that machine learning is part of the future of international development. So, moving forward, I’m thinking about:

  1. How does technology reinforce/amplify existing power structures in international development/philanthropy?
    1. What does it mean to disrupt an existing power structure?
    2. How does technology as a tool enable said disruption?
  2. How does technology disrupt existing power structures in cases of gender-based violence?