The $249 Google Clips isn't an action camera.
It isn't a life-logging camera, either.
It's something weirder: a showcase for Google's AI, an artificially intelligent photographer that's supposed to know when to take interesting snapshots.
The idea behind this little camera is that, at family events, it liberates you from having to frame photos and frees you to enjoy the day, while it decides which photos to take.
That's an interesting idea, but in practice Clips takes unflattering, poorly framed videos and ends up creeping out everyone in the room.
I'm pretty heavily invested in Google products.
I have two G Suite accounts, an Android phone, a Google Home, a Google Play Music subscription, and hundreds of gigs of photos in Google Photos.
I used Clips for a week when taking my daughter to coffee, visiting my new baby niece, and at a comics convention.
These are all times that I'd like memorialized, but also times when it's more fun to be in the moment rather than being stuck behind a lens.
So the concept of Google Clips, completely divorced from the product, makes sense.
The problem is in the execution.
Cute, but Creepy
Google Clips is cute.
It's a 2-inch square, 0.8-inch deep camera that weighs 2.1 ounces and fits in a little rubbery cover that has a clip on it, thus the name.
On the front, there's a protruding fixed-focus lens that doubles as the on/off switch—you twist the lens to power it up—along with a shutter button you can press to force a clip rather than waiting for the AI.
Three LEDs tell you when the camera is on, working well, or low on storage.
Peel Clips out of its case and there's a USB-C port on the bottom for charging.
The device is not water resistant or rugged, but the case makes it a little bouncy.
You can attach Clips to things, or stand it up in its little case.
I found that because it basically always needs to be sitting on a table, its shooting angle is wrong.
On a dinner or coffee table, Clips videos are always taken from too low an angle to be flattering for faces; ditto for putting the camera on the floor when I was playing with a baby.
I tried putting it on other furniture, but usually got too high an angle, with lots of heads and ceilings.
The case doesn't let you adjust the camera's angle.
Also, the case's base isn't particularly wide or sturdy, and the camera fell over a number of times in my testing.
The camera needs a case that sticks to a wall, or one that tilts.
Pretty much everyone I tried Clips with was creeped out by it.
My daughter didn't like it.
My sister-in-law didn't like it either.
After a while, I realized that the creepy factor isn't that it's always watching: It's that you don't know when it's watching.
The uncertainty is the problem.
With a camera on in your room all the time—say, a home security camera—you either perform to it, or ignore it.
Psychologically, you make one decision, once, about it.
But there's something unsettling about the uncertainty of Clips: Is it recording? Is it not? That keeps everyone on edge.
Clips doesn't violate people's privacy in the way you might expect from Google.
It doesn't post images to the internet automatically—you have to affirmatively select and transfer each clip to put it online.
The AI face-recognition algorithms remain in the camera.
Clips data doesn't get added to Google's profile of you unless you upload your clips to Google Photos.
But still, the omnipresence and uncertainty made everyone in our room uneasy.
Google says Clips downloads data from your Google Photos library to help identify which people are most important to you.
For what it's worth, I didn't see that as having an effect on the clips the camera was recording.
Performance
Clips has a 12-megapixel sensor and a 130-degree, fish-eye lens.
It captures only silent, 7-second videos at 15 frames per second.
The videos appear to be the same size when viewed in Google Photos, but when downloaded to my PC, I saw that they were all slightly different sizes: 1,920-by-1,410, 1,920-by-1,250, 1,920-by-1,218, 1,520-by-972, 1,472-by-962, and 1,440-by-962 files all appeared.
Those are even all slightly different aspect ratios!
You can switch between high and low quality, and make the camera snap videos at three levels of frequency.
High-quality clips are mostly in the 2 to 3MB range, and low-quality clips are 1 to 2MB.
Low-quality clips look less defined, but maintain the same frame rate.
With the ability to store about 5,000 clips in the camera's 16GB memory, but only 2.5 to 3 hours of battery life (less than a fun afternoon with the baby), there's no reason not to use the high-quality, high-frequency setting: You'll need to stop and recharge well before you fill the camera up.
As the camera collects clips, you can thumb through them in the Clips app on your phone.
The app is also the only way to get clips off the camera, so you need it.
The app supports Pixel phones, the Galaxy S7 or later, and the iPhone 6 or later.
There's no way to set up or connect to Clips from a PC or tablet.
The app torched my Galaxy S8's battery to the tune of about one percent per minute, so I tried not to use it too much.
You can save each clip as a "motion photo," a very large (12MB to 15MB) GIF, or a video—or throw it out.
You can also save individual frames from the videos as stills.
[embed]https://www.youtube.com/watch?v=o54eI6lv6XM[/embed]
None of these clips looks very good.
They're jerky at 15 frames per second (you need at least 24fps for video that looks smooth to modern eyes), with washed out bright areas and little shadow detail.
More than that, though, Clips videos are deeply unflattering.
The fish-eye is a big part of this: people near the edges of the frame appear bloated and distorted, and since you aren't voluntarily framing your photos, you're likely to end up with clips where things you want to see are at the edges of the frame.
The camera's AI is designed to focus on human faces and take clips when it sees a face in frame.
Mostly, this works.
When I was at the comics convention, for instance, most of my clips came when I was standing at tables and talking to people, with the people in frame.
Clips handles the binary choice between zero people and one person in frame pretty well.
[embed]https://www.youtube.com/watch?v=YL7QFhXQJEU[/embed]
At a party with some people always potentially in frame, though, a lot of the camera's choices were off.
I have a clip where my sister-in-law carries the baby through the frame so the baby's head is cut off; one of my hand adjusting the camera; one where my mom's back is blocking a good part of the frame; and many where there are beloved family members hideously distorted by the fish-eye view at the edges of the frame.
There are people in all of these clips, but beyond that, the composition is simply poor.
That's the camera's biggest problem.
Conclusions
Take your own photos.
I understand the attraction of an AI photographer liberating you from having to choose and frame shots when you're playing with a new baby or opening birthday presents, but Google Clips is not that photographer.
It takes weird, jerky, oddly shaped, fish-eye videos on an unexpected schedule that keeps you on edge.
It's an obvious case of a very rich company putting out an experimental product that it probably knows, in this iteration, isn't very good.
Google will use the experience of Clips to refine its AI for recognizing scenes and people, so it isn't a waste for the company.
As for you, use your smartphone or digital camera, frame your shot, share it, and enjoy it.