The 1994 James Cameron film True Lies, starring Arnold Schwarzenegger, was recently re-released in Ultra HD 4K disc format, giving viewers the opportunity to watch the classic in unprecedented detail.
Not only True Lies: Cameron’s The Abyss and the sci-fi classic Aliens were also released on Ultra HD Blu-ray, and Geoff Burdick, senior vice president of Lightstorm Entertainment, tells The New York Times that he thinks they “look the best they’ve ever looked.”
But not everyone agrees.
“It just looks weird, in ways that I have difficulty describing,” the journalist Chris Person tells The Times. “It’s plasticine, smooth, embossed at the edges. Skin texture doesn’t look correct. It all looks a little unreal.”
This reminds me of arachnophobia mode in Grounded. Harvophobia filter.
Steve Harvey at home.
Personally I think the example shot in the thumbnail looks worse after being “enhanced.” Arnold’s hair was a dead giveaway, shit just looks weird.
Came here to say exactly that. Tom Arnold also just looks plastic in the “enhanced” one.
It’s a freeze-frame of a high-speed action sequence with motion blur. You almost certainly wouldn’t notice in that second shot. Maybe in the first, which was a slower tracking shot.
That’s true.
I think my biggest gripe with it is that it looks like when I abruptly change my smart bulbs from warm white to cold white. lol. It’s jarring and unpleasant at first and definitely takes a minute or five to adjust to it.
The fabric on their suits also looks weird!
This is like early CGI effects in film. Some of those effects are the worst in film history (see Reptile from Mortal Kombat), and some were so good that we don’t even realize CGI was used (see the helicopters in Black Hawk Down). This is a new technology that is going to be abused majorly in tons of notable cases, and we probably won’t notice the instances where it was used successfully.
The tech is clearly not sophisticated enough at this point to reliably enhance film images realistically. However, at this stage of development it would probably be excellent for old animation or for films whose originals have severely deteriorated.
Now wait for Gen A to grow up and start using bad AI smoothing as a desirable retro effect, like vinyl crackle, tape hiss or obvious autotune on vocals.
Looks like one of those paintings that are almost real. I don’t know what they are called, but damn.
I think you mean hyperrealism.
Thank you :)
South Park addressed this many years ago.
22 years ago… sigh, the cycle repeats, and repeats, and 🔁…
I’m sure the authors chose some of the more egregious examples as stills for this article to make their point, but goddamn, that really does look like shit. What were they thinking? It doesn’t even sound like a cost-saving measure if the original negatives exist. The purported reasoning, that it’s not about the condition of the negatives but an opportunity to improve on the original, doesn’t make sense, because you’d at the very least want to start with the original negs before “improving” the film. The phrasing makes it sound like they didn’t, and judging by the still in this article, it looks like they didn’t either.
The way they describe the use of the technology could maybe be a net positive at some point, but this sure doesn’t seem to be an example of that. Did they just not have access to the negs or something? Was there some bizarre licensing arrangement that prevented them from doing this the traditional way? This looks much more like an elaborate workaround for an obstacle than a better-than-ideal value add. If somehow all prints and copies of the film in existence disappeared except an old VHS, this would be an admirable and impressive way to get from that to a UHD release, but as a first-choice option it seems like madness. It seems pointless to do this until the tech literally produces a superior result to a new remaster from the original film.
The long-term goal is an automated process that restores old films cheaply, since doing it manually is a long process that requires expertise. A limited talent pool for a time-intensive process is the obstacle they are trying to overcome.
They are not thinking about it from the viewer’s perspective, just about how they can market that they technically restored it, with something that passes as a quality improvement in the eyes of the majority of buyers.
What Peter Jackson did with They Shall Not Grow Old was great and efforts like that to actually restore old films should be supported, but movies from the 80s don’t need it.
There are plenty of films from every decade that would benefit from a good quality remaster, especially for HD.
Sure, there is also a ton of crap that isn’t a priority, but that has always been true.
“would benefit from a good quality remaster, especially for HD”
“Remaster” being the key word. Creating a “master”, old or new, involves making decisions about how to best translate the author’s ideas onto a given medium, not just running a generic algorithm and calling it a day.
I bet an AI could do it… some day. But it won’t be a simple pattern matching one, which doesn’t take into account the author’s intent.
As long as a good job is done I think AI upscaling the movies and removing compression artefacts and such is amazing. And people who don’t like it can still watch the OG version.
Yeah, honestly this is exactly the type of thing AI is good for: creating pixel resolution where it once did not exist. The question is just what definition of AI we’re using. We’ve been using and relying on this type of tech for over a decade now; it has simply come under scrutiny now that it’s called AI. Let me preface: there are really terrible upscales out there. And the article itself tries to clarify the difference between enhanced and generative: enhanced meaning upscaling the resolution and creating detail, generative meaning re-creating the scene as an upscaled version. The issue is that enhanced upscaling is still a fair bit of work, as it’s the digital version of manually going through each frame and painting it. So generative AI is cheaper, and of course a giant company would spend resources on making the worst-best version of that over the best-best version of something else…
The former, enhancing, can be done quick and dirty, but done well it’s checked scene by scene for faithful reproduction: things like dark vs. light scenes, fog, and depth of field can all be affected by the upscale, and there are ways to make each “scene” look really good; it just can’t all be done in one pass. This is why live-action upscales haven’t been and still aren’t super great: they’re often done the lazy way, with one full-pass upscale.
With the latter, generative, it doesn’t matter how much effort you put into each “scene” manually. Because it’s recreating what exists instead of expanding on what exists, it’s pretty much always going to have this uncanny-valley effect. As noted by others, you can literally see the generative AI patterning in his hair and in the smoothing on his face. It’s bad. Enhanced upscaling does not do this whatsoever.
As for relying on this kind of tech, Content-Aware Fill has been around since 2010. It never got looked down on until it shifted to Stable Diffusion-powered fill. Genuinely, it was a precursor: the standard use case is of course removing a bird from a pretty sky picture, but another use became increasing the image size and content-aware-filling the background space. Rarely was it looked down on (especially since it often needed manual intervention to touch it up). Now you do the same thing with Stable Diffusion, but with copyright infringement. I don’t disagree with that sentiment, but I don’t much care for how the two are equated.
Anyway, similarly, programs like Topaz and tons of other image and video upscalers have been around using CPU conversions; they only truly boomed alongside AI because they use Tensor Cores, and it was an easy jump from developing CUDA upscaling to more AI-focused upscaling algorithms (layman’s terms, not exact, for brevity). Only in the last couple of years, maybe just the last year, has generative AI really been “used” to upscale, and honestly it hasn’t really been available on its own until recently. I would hazard a guess that these upscales are more of a hobby project and people sharing them than…
All this to say: there are some really good upscales from four years ago that are worth the time of day, if that’s what you’re interested in. Generally speaking, live action is harder and doesn’t turn out as well. As another commenter mentioned, animated works are generally pretty good, since it’s basically upscaling to reduce visual aliasing; the biggest visual defects from animation upscaling are large, visibly chunky blocks and, if it’s done poorly, shifting linework (I personally never had this issue with my own upscaling projects). And I should further clarify, I suppose: everything mentioned here is hobbyist work. I’m not talking about the Lucasfilm enhanced 4K stuff or anything of that sort.
Just taking the OG video file, segmenting it by scene, enhancing each segment, and re-splicing it, roughly like the sketch below. Hobbyists have been doing this for a long time, and it’s insane to me that the generative route is the approach Hollywood goes for.
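Not from the article, just to make the idea concrete: a minimal Python sketch of that segment-and-enhance workflow, assuming ffmpeg is installed. The file names are made up, and the actual enhancer is a placeholder you’d swap in (Topaz, a CUDA model, whatever you use).

```python
# Rough sketch of the hobbyist "segment by scene, enhance each piece, re-splice" workflow.
# Assumes ffmpeg is on PATH; enhance() is a stand-in for whatever upscaler you actually use.
import re
import shutil
import subprocess
from pathlib import Path

SRC = Path("movie_og.mp4")   # hypothetical source file
WORK = Path("segments")
WORK.mkdir(exist_ok=True)

def scene_cut_times(src: Path, threshold: float = 0.4) -> list[float]:
    """Find scene-change timestamps (seconds) with ffmpeg's scene detector."""
    result = subprocess.run(
        ["ffmpeg", "-i", str(src),
         "-vf", f"select='gt(scene,{threshold})',showinfo",
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # showinfo logs a pts_time for every selected (scene-change) frame on stderr
    return [float(t) for t in re.findall(r"pts_time:([\d.]+)", result.stderr)]

def split_at(src: Path, times: list[float]) -> list[Path]:
    """Cut the source into per-scene segments (stream copy; splits land on keyframes)."""
    if not times:  # no cuts detected: treat the whole file as one "scene"
        single = WORK / "seg_0000.mp4"
        shutil.copy(src, single)
        return [single]
    subprocess.run(
        ["ffmpeg", "-i", str(src), "-f", "segment",
         "-segment_times", ",".join(f"{t:.3f}" for t in times),
         "-c", "copy", "-reset_timestamps", "1", str(WORK / "seg_%04d.mp4")],
        check=True,
    )
    return sorted(WORK.glob("seg_*.mp4"))

def enhance(segment: Path) -> Path:
    """Placeholder for the per-scene enhancement pass (Topaz, a CUDA model, etc.).
    Here it just copies the segment so the pipeline runs end to end; in practice
    you'd tune the enhancer per scene (dark vs. light, fog, depth of field)."""
    out = segment.with_name(segment.stem + "_up.mp4")
    shutil.copy(segment, out)
    return out

def splice(segments: list[Path], dest: Path) -> None:
    """Re-join the enhanced segments with ffmpeg's concat demuxer."""
    listing = WORK / "list.txt"
    listing.write_text("".join(f"file '{s.resolve()}'\n" for s in segments))
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0", "-i", str(listing),
         "-c", "copy", str(dest)],
        check=True,
    )

if __name__ == "__main__":
    enhanced = [enhance(seg) for seg in split_at(SRC, scene_cut_times(SRC))]
    splice(enhanced, Path("movie_upscaled.mp4"))
```

The point is the structure, not the specific tools: detect the cuts, process each scene with settings that suit that scene, then splice the results back together, instead of one lazy full-pass upscale over the whole film.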
The tech seems to work better with cartoons. But it’s not perfect there either.
Looks fake as fuck. Also gave the dude more defined wrinkles with smooth skin in between. Uncanny valley shit.
Ryan added that Schwarzenegger’s and Tom Arnold’s faces look like they are “made out of putty.”
Duh. If the AI changed that, they would be unrecognizable.