Remember Twitter's racist photo-cropping tool? It's still a thing.

The company first acknowledged the bias in its machine-learning system back in October.

Late last September, Twitter users began noticing a pretty major flaw in the social media app's algorithmic photo auto-cropping feature: the program that identifies human faces to pick thumbnail focal points was pretty damn racist, consistently centering images on white faces over Black ones whenever a photo featured people of different races. The company issued an apology via its blog not long afterward and detailed the additional testing it was undertaking to correct the unintended bias. So, how is Twitter's photo-cropping AI doing two months later? Pretty damn badly, by the (cropped-out) look of things.

One of these things is not like the other — The two images posted above are made up of stills from Georgia's recent U.S. Senate runoff debate between the Democratic candidate, Reverend Raphael Warnock, and incumbent Republican Senator/Trump sycophant, Kelly Loeffler. Despite the two candidates' placements being swapped between the images, Twitter's cropping algorithm still decided to focus squarely on Loeffler, a white woman, each time. See: Exhibits A, B, and C.

Twice the smarm. Twitter / @artordillos

Still racist, even if unintentional — While some argue this can be explained away by Twitter's algorithm simply being trained to prioritize contrast, that doesn't mean it's not, y'know, still really damn racist. Allowing this to continue will keep unfairly steering attention toward lighter-skinned people in photos, which is a problem when it comes to media coverage of, oh, let's see here, a pivotal Georgia runoff election deciding who holds the Senate majority next year. Repeat after us: racial bias, no matter how unintentional, is still racial bias.
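To see how that kind of bias can emerge without a single slur anywhere in the code, here's a deliberately crude sketch. To be clear, this is our toy, not Twitter's system (the real cropper used a trained saliency network, and every number and name below is made up for illustration): a cropper that simply centers thumbnails on the highest-contrast region of an image.

```python
# Toy sketch only: NOT Twitter's actual cropper, which used a trained saliency
# network. This illustrates how a crop heuristic that rewards local contrast
# can favor lighter faces without any explicit reference to race.
import numpy as np

def local_contrast(gray: np.ndarray, window: int = 32) -> np.ndarray:
    """Per-block standard deviation of intensity over non-overlapping windows."""
    h, w = gray.shape
    h, w = h - h % window, w - w % window
    blocks = gray[:h, :w].reshape(h // window, window, w // window, window)
    return blocks.std(axis=(1, 3))

def pick_crop_center(gray: np.ndarray, window: int = 32) -> tuple[int, int]:
    """Center the thumbnail on whichever block scores highest on contrast."""
    scores = local_contrast(gray, window)
    row, col = np.unravel_index(scores.argmax(), scores.shape)
    return (row * window + window // 2, col * window + window // 2)

# Synthetic scene: a dark background, one darker-skinned face (intensity 110)
# and one lighter-skinned face (intensity 220). The lighter face's edges
# produce far stronger local contrast, so the crop lands on it every time.
img = np.full((256, 512), 60, dtype=np.uint8)
img[100:164, 70:134] = 110    # darker-skinned subject, left side
img[100:164, 390:454] = 220   # lighter-skinned subject, right side
print(pick_crop_center(img))  # (112, 400): squarely on the lighter face
```

Because the score here depends only on pixel contrast and not on position, swapping the two subjects left-to-right changes nothing; the crop chases the lighter face either way, which is exactly the pattern the debate stills show.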

Hard to believe it's anecdotal — Of course, this might simply be an anecdotal case of annoying image-cropping errors, but given the well-documented history of racial bias in algorithm design, we find that counterargument a bit hard to believe. Racist machine learning has already made it more difficult for Black patients to receive kidney transplants, and let's not forget that some companies will even admit to inherent problems within their AI products, even if they first try to hide the fact from potential customers. Hopefully, Twitter will finally start feeling the heat from this and implement real, measurable changes to its code, although given the company's track record, we certainly aren't going to hold our breath on this one. At least it's (finally) coming to realize that verifying literal Nazis is probably a bad look.