Meta, the parent company of Instagram, has apologized for what it claims was a technology glitch that inserted the word “terrorist” into the biographies of some Palestinian users. According to the BBC, a 2022 external investigation had already concluded that “Meta’s actions appear to have had an adverse human rights impact on… the ability of Palestinians to share information and insights about their experiences as they occurred.”

Beyond Meta’s troubling track record regarding Palestinians specifically, this episode raises broader questions about the role of tech companies, AI, and algorithms in shaping what information gets shared, which perspectives find audiences, and who gets labeled a “terrorist.”

(Taken from an email sent to me by Never Again Action.)