This year, United Nations officials will meet for what could be a historic discussion on whether to issue a blanket ban on killer robots — but what will this international group of experts talk about? Big concepts, most likely. Led by officials from the ever-energetic Campaign to Stop Killer Robots, the discussion will likely bounce from autonomy to culpability to what constitutes a war crime. What won't the experts talk about? The robots already killing people. According to armed conflict expert Chris Jenks, this is because advocates are arguing that we need to avoid the creation of deadly autonomous systems, even though such systems already exist. The cat is out of the bag, and it's a robot tiger.

Before moving to academia, Jenks spent years researching drones and related issues for the Pentagon, which is why his view of killer robots is firmly rooted in the technology of the present day. He argues that there are already pieces of technology, notably drones, that fulfill the campaign’s definition of a lethal autonomous weapons system, and that it’s simply unrealistic to push for a ban that would require states not only to forego developing new weapons, but to retire ones that already exist.

“The start point for much of the campaign was really post-drone angst,” says Jenks, who teaches at Southern Methodist University. “A number of the groups within the campaign were unpleasantly surprised at the outcome of the drone discussion.”

The crux of the issue is the working definition of LAWS, which is broad enough to include any system that can "select and engage targets without human intervention." Jenks believes that automated defensive systems, including the one used by U.S. Patriot missile batteries, fit this definition. A Patriot system shot down a Royal Air Force fighter over Iraq back in 2003, killing its two crew members. The incident was ultimately determined to have been the result of a glitch: The system misidentified the jet as a missile.

Mary Wareham, the global coordinator for the Campaign to Stop Killer Robots, takes issue with this line of reasoning, saying that Patriot systems are “absolutely not” within the scope of their proposed ban. She says that the states she works with will help tweak definitions to create understandings that might produce a workable, practical deal.

“This is a preemptive ban,” she stressed over the phone. “We’re flattered by the attention, but the governments are in charge here…. They decide the definitions of these weapons, and they decide whether to ban them. We’re not pulling the strings from behind the scenes.”

For a sense of what that means in a diplomatic context, it’s helpful to look toward Russia, which recently objected to the proceedings on the basis that they “will be beneficial only when we have developed a clear understanding of the subject under discussion.” Representatives have since agreed to let the talks go forward.

Jody Williams, a Nobel Peace Laureate, and Professor Noel Sharkey, the chair of the International Committee for Robot Arms Control, pose with a robot at a protest.

“We actually made a breakthrough this December,” Wareham said, referring to the recent decision by 89 nations to begin formally creating regulations for lethal autonomous weapons. “We’re not satisfied that they’re moving fast enough, but they’re moving in the right direction.”

Of these 89 countries, 19 have now said they want the same blanket ban as the campaign — the one built on definitions that are likely to be tweaked. From the campaign’s perspective, this means a seat at the table and the opportunity to combat state-sponsored apathy and opposition. From Jenks’s perspective, this represents little more than sound and fury.

To Jenks, any seeming potential could be a mirage, allowing for what he calls "willful confusion" on the part of states that have more interest in being seen to be reaching an agreement than in actually reaching one. Jenks believes that a more focused definition of LAWS might make a ban more appealing or, on the flip side, difficult for countries to avoid signing without stating clear intent. But he's also realistic about the state of international relations and the different cultural norms emerging around robots.

A recent poll by Ipsos found that a majority of the people in all but five of 23 countries surveyed opposed lethal autonomous weapons. The wrinkle was that one of those five countries was China. The U.S. military has already expressed openness to a future without autonomous weapons, but an openness to falling behind in an arms race? Not so much.

But none of that diminishes Wareham's optimism. She says the Campaign to Stop Killer Robots will continue to push for a preemptive blanket ban, explaining that partial bans can end up serving as a distraction.

“They can end up being rabbit holes you go down and you never come out of again,” she says.

On this, she and Jenks agree.

Photos via Getty Images / Oli Scarff, Getty Images / Jordan Pix