I have not been able to write much these past few weeks for two main reasons: 1) school is hard; and 2) I haven’t had much to say about my projects. However, I came back from Thanksgiving break having forgotten that I had a Design Meeting on Wednesday. I couldn’t remember what I had to prepare for the class, so I emailed Malte Jung (my instructor and robot expert) to ask if we were supposed to prepare anything. He said that we were meant to present three to five working prototypes for the class to review.
Well, first of all, fuck. Per usual, I missed a very important detail from class. AND I only have one idea.
But okay, breathe deep and dream and think.
My advisor really liked this idea of using crowd-sourced knowledge to support communities of gender-based violence (GBV) survivors. And, when you think about that as a starting point, there are many ways to design tools that can help.
Since the last post, I’ve been exploring limitations of, and tweaks to, my original idea: a Google Chrome extension that lets people highlight problematic phrases or language that perpetuates gender-based violence in articles. I’ve had really great conversations about it with friends and colleagues. One of the biggest issues I’ve heard, and definitely agree with, is that language can be difficult to parse. For example, let’s say that you see Phrase A and highlight it as problematic. That might not be what Person B sees – they might think the whole sentence is problematic. I think this is an issue of how people read through a statement or article and try to evaluate it without any criteria or focused questions. So perhaps there is a different way to approach this: one thing I would like to try is observing people doing this as an exercise, without any specific instruction, to get a handle on the issue.
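To make the Phrase A / Person B disagreement concrete, here is a minimal sketch of how two readers’ highlights on the same article could be compared. This is entirely my own illustration, not part of any existing extension; the character-offset spans and the Jaccard-style overlap metric are assumptions.

```typescript
// Hypothetical sketch: two readers highlight spans in the same article text,
// and we measure how much their highlights agree.

type Span = { start: number; end: number }; // character offsets into the article

// Jaccard-style overlap: length of intersection / length of union.
// Returns 0 when the spans don't touch at all, 1 when they are identical.
function overlapRatio(a: Span, b: Span): number {
  const intersection = Math.max(0, Math.min(a.end, b.end) - Math.max(a.start, b.start));
  if (intersection === 0) return 0;
  const union = (a.end - a.start) + (b.end - b.start) - intersection;
  return intersection / union;
}

// Person A highlights just a phrase; Person B highlights the whole sentence.
// They are pointing at the same passage, but the score comes out low.
const phraseHighlight: Span = { start: 120, end: 140 };
const sentenceHighlight: Span = { start: 100, end: 200 };
console.log(overlapRatio(phraseHighlight, sentenceHighlight));
```

A metric like this could at least surface *where* annotators diverge, which is what an observation exercise without fixed criteria would be probing.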
Main things to think about: language when it comes to GBV.
One of the new ideas I had (while doing the dishes) is an apology discernment website – what if there were a website that could parse apology statements, or show people whether an apology is sufficient? People’s responses to these apologies have been… rather troubling. For example, after the Louis C.K. apology statement was released, there were male journalists on my Twitter TL legit saying “WOW, WHAT AN APOLOGY”.
But if I may “well actually” here: it wasn’t really an apology, especially if you look at the statement within the context of originating from a sexual predator. So, what if there was a website set up kind of like an informative quiz, one that asked you certain questions about an apology statement to help you discern whether it was actually an apology before you tweet or make statements about it? First, I would have to design the quiz. I think it would show one question on the screen at a time, and you would have to click an answer to go to the next question. But what would the evaluation look like? Are we talking about a score here? Or categories, where, for example, we can say that the accused does not display a workable understanding of consent in their statement, and provide an explanation of why that’s important? There are so many ways to conceptualize this website – I wonder if this is my most doable* idea.
*When I say doable, I mean simply that I can accomplish making it – not necessarily that the product will have impacts that I want.
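As a thinking aid, here is a rough sketch of the category-based evaluation (rather than a numeric score). Everything here – the question wording, the category names, the data shapes – is a placeholder I made up for illustration.

```typescript
// Hypothetical sketch of the one-question-at-a-time quiz with
// category-based evaluation instead of a score.

type Question = {
  prompt: string;     // shown one at a time on screen
  category: string;   // which aspect of the apology this question probes
  failNote: string;   // explanation shown when the statement falls short
};

type Answer = boolean; // true = the statement passes this question

const questions: Question[] = [
  {
    prompt: "Does the statement actually say 'I'm sorry' or 'I apologize'?",
    category: "acknowledgment",
    failNote: "An apology that never apologizes is a non-apology.",
  },
  {
    prompt: "Does the statement acknowledge that consent was violated?",
    category: "consent",
    failNote: "The accused does not display a workable understanding of consent.",
  },
];

// Collect the categories where the apology falls short, each paired with
// an explanation of why that category matters.
function evaluate(answers: Answer[]): { category: string; note: string }[] {
  return questions
    .filter((_, i) => answers[i] === false)
    .map((q) => ({ category: q.category, note: q.failNote }));
}

console.log(evaluate([true, false]));
```

The nice property of categories over a score is that the output is already an explanation – each failed category carries its own “why this matters” note, which fits the education-tool framing.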
If I could, I would make it more like a game. However, I need to think about who I am making this for, again. If I’m making it to support communities of sexual trauma victims, then making it a game with humorous aspects probably isn’t the best idea. I use humor to cope a lot, but I have to recognize that it is not the same for others. However, if I thought about this as an education tool… hm.
Main things to think about: what does a good apology look like when it comes to trauma?
My last idea is not even mine. It came from a dinner with housemates, where I brought up my project and we went into an impromptu discussion about it. One thing someone brought up that I haven’t gotten my mind off of was: what if there was a crowdsourced flag system for social media? Like, in your Twitter timeline, there’s a tweet that defends accused sexual predators and you don’t want to see it. You could enable a plug-in that alters your feed and, as my housemate said, puts a colorful Post-it on top. So, let’s say Person A tweets something that is very triggering for sexual violence survivors – Person B can flag that tweet to have a Post-it placed on top of it. (This is also another interesting discussion of what counts as flag-worthy. Twitter is notorious for refusing to censor abuse or harassment. More thoughts on this later.)
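The decision logic behind the Post-it idea can be sketched separately from the messy DOM/platform part. This is purely hypothetical – the shared flag store, the tweet shape, and especially the flag threshold are my own assumptions, and the threshold is exactly where the “what counts as flag-worthy” question lives.

```typescript
// Hypothetical sketch of the crowdsourced flag system: given a shared map of
// how many people flagged each tweet, decide which tweets in a timeline
// should get a Post-it overlay.

type Tweet = { id: string; text: string };

// flagCounts maps a tweet ID to how many people have flagged it.
// Requiring at least `threshold` flags is one (very debatable) answer to
// the question of what counts as flag-worthy.
function shouldCover(tweet: Tweet, flagCounts: Map<string, number>, threshold = 2): boolean {
  return (flagCounts.get(tweet.id) ?? 0) >= threshold;
}

const flags = new Map([
  ["t1", 3], // three people flagged this one
  ["t2", 1], // only one person objected
]);
const timeline: Tweet[] = [
  { id: "t1", text: "a tweet defending an accused predator" },
  { id: "t2", text: "a borderline tweet" },
];
console.log(timeline.map((t) => shouldCover(t, flags))); // which tweets get a Post-it
```

In an actual plug-in, a content script would then cover the flagged tweets’ DOM nodes with the Post-it element – that part is the “I’m not sure how to interact with the platform” problem below.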
Altering tweets is… weird, and I’m not sure how to interact with the platform to do that. There is this plug-in, Make Trump Tweets Eight Again, that visually changes Trump’s tweets so they look like they were written by a child in crayon.
I’m looking for the code – on GitHub or somewhere – so I can see how they did it. Not seeing anything yet. As much as I like to stay away from Trump’s Twitter, I’m going to download the plug-in and see how it works.
There’s also this concept of filtering (or censoring in this case) social media feeds. Ethan Zuckerman recently announced a new tool called Gobo that can help filter social media. And, what I think is most interesting is that it brings the idea of control back into frame.
Main things to think about: Idk if I can actually make this.
All in all, I have to build three prototypes overnight so wish me luck slash pray for me, thanks.