
HTC One UltraPixel Camera: Software Update Improves Picture Quality

The HTC One is a pretty amazing device, not just because it packs the latest technology, such as a 1080p display and a Snapdragon 600 processor, but also because HTC has done a lot of things differently. The One differentiates itself from the rest of the Android bunch by delivering an all-aluminum unibody design, front-facing speakers and a pretty interesting camera that has been dubbed “UltraPixel”.

You’re most likely familiar with this sensor’s specifications, and to be honest, HTC has taken a pretty big risk with it. Not many people are aware that the number of megapixels on a camera isn’t what makes a picture look good, and while the rest of the smartphone manufacturers are constantly increasing the MP count, HTC has switched gears and decided to focus on what really matters, regardless of the marketing risks.
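To put a rough number on that idea: the One reportedly packs about 4 million pixels at roughly 2.0 µm each, while a typical 13 MP phone sensor of the same generation sits at around 1.1 µm per pixel. Here’s a quick back-of-the-envelope comparison in Python (the figures are our own ballpark estimates, not official HTC specs):

```python
# Rough light-gathering comparison per pixel (illustrative ballpark
# figures, not official HTC specifications).
ULTRAPIXEL_PITCH_UM = 2.0  # reported pixel size of the HTC One sensor
TYPICAL_PITCH_UM = 1.1     # ballpark pixel size of a typical 13 MP sensor

ultrapixel_area = ULTRAPIXEL_PITCH_UM ** 2  # area in square micrometers
typical_area = TYPICAL_PITCH_UM ** 2

print(f"UltraPixel area: {ultrapixel_area:.2f} um^2")
print(f"Typical pixel:   {typical_area:.2f} um^2")
print(f"Light per pixel: ~{ultrapixel_area / typical_area:.1f}x more")
```

Roughly 3.3 times the area means roughly 3.3 times the light hitting each pixel, which is exactly the low-light advantage HTC is betting on.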

Initial testing has shown that the main camera on the HTC One is pretty impressive, even when compared with the iPhone 5. Interestingly enough, though, the images captured with the UltraPixel sensor and shared over the past few weeks don’t showcase the final product’s capabilities.

To be more precise, according to gforgames, Hardware Zone of Singapore has received two HTC One test units. The second unit featured a new camera software update, and the difference in image quality is noticeable, to say the least.

The good news is that, according to HTC, all the One units that will hit the market in the near future will pack this latest firmware.

All in all, HTC continues to impress us with its latest flagship phone, and we’re eager to see it reach the pockets of One enthusiasts across the globe.

Any thoughts regarding the gadget’s main camera?


Android 4.2 Photosphere – An Interesting Way Of Taking 360 Degree Photos/Panoramas

As we told you a few days ago, our new main goal here at Chasing Mills is to expand our horizons a bit. We’ll still be a Chasing Windmills fan-blog, but we’ll also talk a bit more about cinematography and photography, give you tips on how to start and maintain your own VLOG, and discuss the latest advancements in technology. And speaking of technology, today we’re going to talk about a piece of software that, even though some think it’s not yet ready for prime time, brings a new, easy way of taking 360-degree pictures: Android 4.2’s Photosphere.

Alright folks, so Android 4.2 has been around for quite some time, but as many of you are aware by now, so far this latest OS iteration has been made available mainly on Nexus devices. That being said, there are a few interesting features that you’re pretty much missing out on if you don’t own a Nexus smartphone, so let’s talk a bit more about one Android 4.2 feature in particular and see what you can expect from it once this version of Jelly Bean arrives on your non-Nexus device, whichever it may be.


Android 4.2 Photosphere

With Android 4.2, Google has implemented a new camera feature that the photographer in you should be excited about. We’re obviously talking about Photosphere, a nice little addition to the camera application that allows the user to take 360-degree panoramic shots.

If you’re familiar with the regular panorama mode, then you already have a good idea of how this feature works. With a regular panoramic shot, you capture one frame, then a set of instructions appears on the screen, guiding you to move your device to the left or right of your initial shot. Photosphere works pretty much the same way; the biggest difference is that you can capture a full 360-degree sphere, obviously. Also, with Photosphere you have more freedom as to which shot you take next, but you do have to make sure that your camera is focusing on the spot indicated by the software.
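Neither Google nor HTC publishes the exact pipeline behind these camera modes, but the basic idea, matching features in overlapping frames and warping them onto a common projection, can be sketched with OpenCV’s high-level stitcher. The file names below are placeholders for a handful of overlapping shots:

```python
# Minimal panorama-stitching sketch using OpenCV's high-level API.
# This is NOT Photosphere's actual pipeline -- just the same basic idea:
# find features shared by overlapping frames, estimate how the camera
# moved between them, then warp everything onto one projection.
import cv2

# Placeholder file names: overlapping frames swept left to right.
frames = [cv2.imread(name) for name in ("shot1.jpg", "shot2.jpg", "shot3.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    # The usual failure is too little overlap between neighboring frames,
    # which is exactly why Photosphere guides you to the next target dot.
    print("Stitching failed, status code:", status)
```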

All in all, this particular camera app addition is quite interesting, to say the least. True, it takes a bit of time to cover a full 360-degree view with multiple pictures, and that can become a tedious task at times, but it’s a cool feature nonetheless.

[UPDATE] If you’ve been keeping up with the mobile news, you might already know that, according to rumors, Samsung is going to implement a similar feature, called Samsung Orb, in its upcoming Samsung Galaxy S4 smartphone. Given Sammy’s habit of taking tech ideas from other developers/manufacturers (Google in this case) and improving on them, we have high hopes that the process of taking 360-degree “photo-spheres” will be even more streamlined. However, this is just a rumor for now. As far as the Samsung Galaxy S4 launch is concerned, nothing is set in stone, but the end of Q1 / start of Q2 seems like a probable candidate.

Check out the video below if you wish to see the app in action and enjoy!


A Look Back: What Terminator 2 Meant For The Industry

In the old days, if you wanted King Kong to climb a building, you had two choices: stop-motion photography, the painstaking hand-animation of a tiny model performed one frame at a time, or cel animation, which involves filming hundreds of hand-drawn illustrations painted onto plastic “cels.” Now, there’s a third, more effective alternative: computer-generated imagery (CGI).

Nowadays CGI might be everywhere (in fact, most of us are sick of poorly made CGI movies), but this technique was brought to everyone’s attention by Terminator 2: Judgment Day. The effects work in this film was, at the time, dazzling: a brilliant combination of old and new techniques that made audiences actually believe the unbelievable.

“Our computer animation department tripled in size for that film,” notes Jill Jurkowitz, spokeswoman for Industrial Light & Magic (ILM), which handled the CGI end of the job. (Produced on a large budget and a tight schedule, T2’s effects used a combination of CGI, models and masks supplied by ILM, Fantasy II, 4-Ward Productions and makeup maven Stan Winston.) The ILM sequences found the evil terminator (Robert Patrick) slithering through a barred window, having his head split in two by a shotgun blast, walking out of a flaming inferno and literally falling to pieces in a final fiery moment. Jurkowitz claims the most difficult effect was a sequence in which the evil terminator pours himself into a helicopter seat and then reconstitutes himself as a person. “We had to match dialog and motion,” she says, “which can be especially tricky.”

Yet little is simple where CGI is concerned. For any body metamorphosis (or “morphing”), a technician must give the computer the first frame in the transformation and the final result; the machine figures out how to fill in the transitional frames so that the first form mutates fluidly into the last.
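ILM’s actual tools are proprietary, but the “fill in the in-between frames” idea can be shown in its most naive form: a straight cross-dissolve between the first and last key frame. A real morph also warps the geometry of one shape toward the other along the way; this little sketch (assuming two same-sized images with the hypothetical names first_frame.png and last_frame.png) only blends the pixels:

```python
# Naive "in-betweening" sketch: linearly blend from the first key frame
# to the last. A real morph also warps geometry along the way; this only
# shows how a machine can fill the transitional frames between two keys.
import numpy as np
from PIL import Image

first = np.asarray(Image.open("first_frame.png"), dtype=np.float32)
last = np.asarray(Image.open("last_frame.png"), dtype=np.float32)

TOTAL_FRAMES = 24  # one second of film at 24 fps

for i in range(TOTAL_FRAMES):
    t = i / (TOTAL_FRAMES - 1)            # 0.0 at the first key, 1.0 at the last
    frame = (1.0 - t) * first + t * last  # weighted blend of the two keys
    Image.fromarray(frame.astype(np.uint8)).save(f"tween_{i:02d}.png")
```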

That means creating mathematically defined three-dimensional computer models for all the characters or sets, then making an onscreen wireframe display of the model. The model is run through tests for fluidity and then “rendered,” often the longest process. At this point, surface texture, color, lighting, shadows and reflections are added — to get it right, hours can be spent on a single frame. Digital compositing combines the figure with previously filmed background scenes. The finished sequence is electronically transferred back onto film, a single frame at a time.
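That last compositing step boils down to a per-pixel weighted sum: the rendered element carries a matte (an alpha channel) that says how opaque each pixel is, and the filmed background plate shows through everywhere else. A minimal sketch of the standard “over” operation, again with hypothetical file names:

```python
# Minimal digital-compositing sketch: lay a rendered element with an
# alpha matte over a previously filmed background plate (the "over" op).
import numpy as np
from PIL import Image

foreground = Image.open("rendered_figure.png").convert("RGBA")   # CG element
background = Image.open("background_plate.png").convert("RGBA")  # filmed plate

fg = np.asarray(foreground, dtype=np.float32) / 255.0
bg = np.asarray(background, dtype=np.float32) / 255.0

alpha = fg[..., 3:4]  # matte: 1 = solid CG pixel, 0 = show the plate
rgb = alpha * fg[..., :3] + (1.0 - alpha) * bg[..., :3]

Image.fromarray((rgb * 255).astype(np.uint8)).save("composite.png")
```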

“What we did took a lot of labor,” says Jurkowitz, who reports that 35 ILM technicians worked for a year to craft just 50 shots for Terminator 2. That’s less than five minutes of screen time, at a reported cost of $6.4 million. Expensive? Yes. And the sweetest irony? Viewers have no idea about the painstaking work involved. But that’s what magic is all about.

Abstract: The computer-animation special effects used in the film ‘Terminator 2: Judgment Day’ were handled by Industrial Light & Magic. Very labor-intensive, the computer-generated imagery took 35 technicians a year to produce five minutes of screen time, at a total cost of $6.4 million.


Small Update

Hi guys,

We’re back with a small update to let you know what’s going on with Chasing Mills.

We thought we’d diversify a bit, and starting today (or tomorrow, depending on when the next article is ready for the masses) we will also talk more about technology, mainly about DSLRs, cameras (even smartphone cameras) and everything else related to shooting and producing your own VLOG.

And we will start adding more episodes from your favorite VLOG – Chasing Windmills, of course.

All the best,

Chasing Mills Team


Episode 1: Light Sleeper – How It All Began

Post

About Chasing Windmills and Its Creators

Chasing Windmills was a weekly VLOG, and a pretty successful one at that (it won the “Best Entertainment Vlog” award in 2006). The creative team behind this World-Wide-Web drama, Cristina Cordova and Juan Antonio del Rosario (who were also the lead characters), have since gone on to bigger projects, but this doesn’t mean we should forget about “Chasing Windmills”.

Nope, we won’t do that. As a matter of fact, we will start re-publishing its weekly episodes and make sure that this innovative (at least at the time – we’re talking 2006) vlog drama won’t be forgotten. And besides that, people with similar interests will have the opportunity to see how such ambitious projects can be managed and what it takes to pull them off.

But first, let’s see how it all started.

Juan Antonio del Rosario and Cristina Cordova

When they first heard about vlogging, Del Rosario and Cordova were living in Puerto Rico. They were writing screenplays for a living, always looking for ways to innovate in this at-times stale industry.

Usually, video blogs are small documentary series, but the duo came up with another idea: what if they could tell the world a story through short, 2-3 minute videos?

At first, they had a pretty hard time coming up with something meaningful, but one day they considered adapting a screenplay they were working on: “The Couple” – a story about a troubled relationship. They took the screenplay’s characters and put them into some of the weirdest and most awkward situations they could think of.

In the meantime, the duo moved to Minneapolis and decided, in the end, to give this little project a try, and this is how this daily VLOG, titled Chasing Windmills, came to life.

Got your interest? Well, stay tuned, as we’ll start publishing highlights (or, who knows, maybe the entire series) along with more insight and maybe some interviews.


What Does VLOG Mean?

Vlogging, short for video blogging, is a form of Web TV which uses video as its main medium. Even though video is the central and almost defining element of video blogging, entries more often than not also include text and images.

Vlogging became really popular in the early 2000s, when decent equipment became more affordable and when software and hosting became capable of handling tasks that might seem common now but were new and innovative back then.


If we think back, VLOGs and podcasts (we could call a podcast the audio-only version of a VLOG) skyrocketed in popularity with the help of Apple’s iPod (the first model was launched on November 10th, 2001) and video sharing websites like YouTube (created by former PayPal employees in 2005 and now owned by Google) and Yahoo! Video (launched in June 2006, now known as Yahoo! Screen).

The creator of the first VLOG is Adam Kontras who, on January 2nd, 2000, posted a video on his blog depicting his move to LA. This was the first post of what would eventually become the longest-running video blog. However, the first videoblogger conference (Vloggercon) was held only five years later, in New York City.