Saturday, May 14, 2011

Fail-station Network

About a month ago, Sony's online services were struck a rather large blow. Someone had hacked into the network, completely shut it down, and stolen a multitude of customer information. However, the "stolen information" part wasn't revealed until a week later. Needless to say, there was an outcry from the community. Things got worse as Sony was faced with lawsuits and subpoenas afterwards. Then, a couple weeks later, Sony Online Entertainment was hacked, with more customer information stolen, as well as potential credit card theft (although this time, Sony was kind enough to warn people right away). Today, the PlayStation Network was relaunched with limited capabilities, with services to be fully restored by the end of the month.

Who caused the damage has remained somewhat mysterious. As far as Sony could figure, it was the work of the loosely organized (and aptly named) group Anonymous. Why them? Two pieces of evidence supported the argument. First, someone had actually left the group's logo as a taunt on the wrecked servers. Although the group as a whole has denied involvement, they did not deny that a couple of members may not have gotten the memo to lay off the attacks, which leads to the second piece of "evidence." After Sony filed a lawsuit against a system modder (someone who alters either the programming or the physical hardware of a console) known as "Geohotz" for creating alterations that allow homebrewing (a term for running unofficial or downloaded software on the system), Anonymous initiated an attack as retribution, feeling that Sony was unfairly targeting him. Sony's case rested on the idea that he was promoting piracy, which Geohotz never did; he advocated such modding for personal use, not to damage Sony's business. The only reason Anonymous decided to call a truce was that they did not want to hurt the community and the users.

In the end, I can't help but feel Sony shot themselves in the foot over this. Sony staunchly opposes piracy and punishes anyone they can catch trying it. However, in some strange sense of irony, they never really took preventative measures to defend against it, relying simply on the technology at hand. Blu-rays, because of their unique format and data encryption, are naturally difficult to pirate (which is probably one of the reasons Sony advocated the format and used it for their PlayStation 3s). For their networks, they used internal means to "encode" their data based on the programs that naturally ran the servers. However, they never went much further to make things more difficult for would-be pirates. In the end, they are about reaction rather than preventative action.

While it's certainly a problem that they are suffering this hack, with drastic effects on everyone (digital distribution has been halted for various companies, and users have been denied content they paid for), one has to wonder if Sony didn't have this coming. What if they had done more to stop such things from happening? If they had focused on the long run rather than just dealing with problems as they came, could the network hacks have been stopped? While there would still be the possibility of someone getting in, there would be less blame to put on Sony's shoulders had they followed this route.

Saturday, May 7, 2011

Real 3D vs. Conversion 3D

I saw Thor last night into early this morning (I'm just going to say it wasn't the smartest idea to see it right before midnight) in 3D. The movie itself wasn't bad. It was a great character study between Thor, his brother Loki, and their father Odin. A lot of the side characters felt rather pointless and unnecessary, but the core story shone through quite strongly. The 3D wasn't bad either. It was nothing special, but it helped accentuate some of the vast locations. However, I felt that it could have looked better. There is a reason for this: the film was converted into 3D from a 2D source. The process isn't complicated, but it is quite tedious and takes several months to get a strong image. This makes me wonder: why didn't the producers just use a 3D camera to begin with?

The fact of the matter is, real 3D has a stronger sense of depth to it, because it captures images in a way similar to how we as people view something. It captures the image from two different angles, with the ability to adjust the main plane or object of focus in the image. It is, in a sense, a complete image. With 3D conversion, however, a duplicate of the image is made and warped to mimic the angular difference that would be perceived by the other eye. There are several problems with this. First off, it doesn't look like true 3D. With real 3D recording, there isn't just depth between planes; the objects themselves have depth. With converted 3D, unless it's done properly, the objects are often described as looking like cardboard cutouts layered on top of each other. While there is depth between planes, the objects appear rather flat. Another problem is that the image warping can result in what is basically a "squashed" image. In essence, it isn't as wide compared to real 3D.
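
To give a sense of what that warping amounts to, here is a minimal sketch of the general depth-based-shift idea (not any studio's actual pipeline); the depth map and the max_disparity value are hypothetical stand-ins for what an artist would author by hand:

```python
# Minimal 2D-to-3D conversion sketch: take one flat frame plus a hand-authored
# depth map and shift pixels horizontally to fake the second eye's viewpoint.
# The depth map and max_disparity value are hypothetical illustrations.
import numpy as np

def fake_right_eye(image: np.ndarray, depth: np.ndarray, max_disparity: int = 12) -> np.ndarray:
    """Warp a single 2D frame into a synthetic right-eye view.

    image: (H, W, 3) array of pixel colors.
    depth: (H, W) array in [0, 1], where 1.0 means closest to the camera.
    Closer pixels get shifted farther, mimicking parallax.
    """
    h, w, _ = image.shape
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            shift = int(depth[y, x] * max_disparity)
            new_x = x - shift              # closer objects slide over more in the new view
            if 0 <= new_x < w:
                right[y, new_x] = image[y, x]
    # Gaps are left where no source pixel lands; filling them (and the flat,
    # per-object depth values) is where the "cardboard cutout" look comes from.
    return right
```

A real stereo rig records two genuinely different viewpoints, so the roundness of objects and whatever sits behind their edges come for free; a conversion has to invent both, which is part of why it takes months of artist work to look convincing.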

This makes one ask: why would one prefer converted 3D over real 3D? One key reason is the size of the camera. The Fusion/Pace camera system, co-developed by James Cameron and Vincent Pace, can capture a true 3D image thanks to its use of horizontal mirrors, but it is considered a massive piece of equipment. Basically, one can only carry it for around 20 minutes at a time due to its size and weight. Another problem is that, because of its size, it's hard to get complex shots where one has to move around comfortably. This is the main argument for conversion; it allows one to get more complex shots with ease. The other reason is that the cost of the equipment is rather high, running into the tens of thousands. Conversion, despite its long turnaround time, still costs less in comparison.

The idea of converted 3D vs. real 3D is under hot debate. When used properly, converted 3D looks quite decent (specifically, Piranha 3D and Alice in Wonderland), but it still lacks the overall punch real 3D has. The quality of converted 3D is generally weaker and less pronounced, and sometimes non-existent (such as in Clash of the Titans, which at some points had no 3D image whatsoever). Still, one can argue that, because the technique is relatively new, it'll evolve over time. Some conversions have looked quite decent, and upcoming movies also look promising (such as Captain America). One can assume that at some point, once the technique is perfected, we won't need the 3D cameras at all.

Saturday, April 30, 2011

Future Proofing is Obsolete

My girlfriend and I have been preparing to build a new desktop computer. It's a general-interest sort of thing: partially for education, partially for entertainment. The fact of the matter is, we are sinking quite a bit into it to make sure that it'll last us a long time. However, something has always bothered me about this, which was recently made fun of in a Best Buy ad... why am I bothering with this? Why bother trying to make something great when in just a few months to a year, the next generation of parts will come out?

Simply put, technology is evolving. If anything, it's growing faster than we are. IBM's new computer Watson, a natural-language question-answering system, was put to the test a couple months ago on Jeopardy!. Even against the two greatest legacy players (one set the record for most winnings in a given period, while the other set the record for most consecutive days on the show), it managed to win both games it played. It has become a technological marvel, able to think more like a human and parse information based on word choice and the manner in which words are used.

It frightens me to think that this technology will probably be considered obsolete in a decade. Not only that, it will be compressed into something smaller and more efficient, much like microchips today are as powerful as the first supercomputers yet cost only a fraction of the price. It also frightens me that what we are building will probably be outdated in a matter of two or three years. Also somewhat depressing is that, in that time, something will come out that's twice as powerful and probably around the same price. This begs the question: is "future-proofing" really worth it?

Saturday, April 9, 2011

The Troubles With Audio: Immersion vs. Quality

One of the most important factors to many of us is the concept of sound. Granted, I still find visuals integral in telling a story (which is the reason I'm an animation major), but I can't help but feel that audio can be the most important factor in creating something (at least when mixing the two together). There's no doubt that on their own, these are two different monsters. It's nearly impossible to visually express sound to a deaf person, like the rhythmic strums of a guitar; similarly, it's nearly impossible to describe the full visual brilliance of a painting through words alone to a blind man, like the use of blending complementary colors in an organic way. I could go on about both, but I want to focus on audio today. The reason is that I just got myself the remastered version of Rush's album "Moving Pictures." Why? It came with a Blu-ray (a high-definition disc meant to contain higher quality audio and visuals, depending on the usage).

I listened to the album and it sounded FANTASTIC. This is no doubt the best I've heard some of this music in quite some time. Generally I've only been able to hear the music on the radio (and because of the limited bandwidth, a lot of it is lost in the process), or go as far as playing their song "Limelight" in the game "Rock Band," which plays the song in Dolby Digital (a codec that delivers higher quality sound with true surround thanks to processed audio separation across the channels). It sounded great, but despite the cleanliness of the audio, the amount of compression means there is still data lost. According to the information on the case, the Blu-ray is practically what one hears in the studio, due to the increased amount of information being played.

Before I continue, I should probably explain more about the concept of data. Basically, there are several factors one must know about: the sampling rate (the number of samples played per second), the bit depth (how much data is present in each individual sample), and the number of channels being used (i.e., the speaker count). Together these determine the "bit rate," or how much data is being played per second. The thought of this came to me as I listened to this new, higher resolution mix of the music. The overall bit rate for this music (in surround sound) was 13.8 megabits per second (Mbps). Compared to a regular CD in stereo, this is about 10 times the bit rate (a CD runs at 1.411 Mbps). The reason is that for the Blu-ray, the sampling rate was increased from 44.1 kHz to 96 kHz and the bit depth was increased from 16-bit to 24-bit encoding. For a CD, this works out to about 700 kbps per channel. For the Blu-ray, it averages out to about 2.3 Mbps per channel.
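
If you want to double-check these figures, here's a quick back-of-the-envelope sketch using the standard formula for uncompressed PCM audio (bit rate = sampling rate × bit depth × number of channels); the channel counts assume stereo for the CD and 5.1 surround for the Blu-ray:

```python
# Bit rate math for uncompressed PCM audio:
#   bit rate = sampling rate x bit depth x number of channels

def pcm_bitrate_mbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Return the total bit rate in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1_000_000

cd = pcm_bitrate_mbps(44_100, 16, 2)      # standard audio CD, stereo
bluray = pcm_bitrate_mbps(96_000, 24, 6)  # 96 kHz / 24-bit, 5.1 surround (assumed)

print(f"CD:      {cd:.3f} Mbps total, {cd / 2:.3f} Mbps per channel")
print(f"Blu-ray: {bluray:.3f} Mbps total, {bluray / 6:.3f} Mbps per channel")
# CD:      1.411 Mbps total, 0.706 Mbps per channel
# Blu-ray: 13.824 Mbps total, 2.304 Mbps per channel
```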

The reason I'm mentioning this is that I also own a music DVD of Cirque du Soleil's "Love," a remix of some of the music of The Beatles. The music was on a DVD and mixed in surround as well. The bit rate averaged about 1.5 Mbps (in DTS, or Digital Theater Systems). However, despite this higher total bit rate, it's nowhere near as good as a CD, which is considered "uncompressed." The reason? Despite a high sampling rate, the bit depth is lower and the audio is dispersed among more channels. In essence, even on the highest encode, the bit rate averaged out to around 300 kbps per channel (although in all accuracy, it would be closer to 400 kbps, due to the way it's processed). This made me realize that, for the longest while, we really weren't getting the full experience when we listened to music on a DVD. Blu-ray audio discs and super-audio CDs are relatively new. This makes me wonder: is the overall quality really worth sacrificing for the "immersive" experience?
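
Running the same rough math for the DVD's compressed DTS track shows why it falls short of a CD even with a bigger total number; the 5.1 layout is an assumption, and the ~300 kbps figure above comes from counting only the five main channels:

```python
# Per-channel comparison: compressed DTS on DVD vs. uncompressed CD audio.
# The 1.5 Mbps total is the figure quoted above; the 5.1 channel layout is assumed.

dts_total_kbps = 1500
cd_per_channel_kbps = 44_100 * 16 / 1000   # ~706 kbps for each CD channel

print(f"DTS over 6 channels: ~{dts_total_kbps / 6:.0f} kbps per channel")
print(f"DTS over 5 channels: ~{dts_total_kbps / 5:.0f} kbps per channel")
print(f"CD per channel:      ~{cd_per_channel_kbps:.0f} kbps")
# DTS over 6 channels: ~250 kbps per channel
# DTS over 5 channels: ~300 kbps per channel
# CD per channel:      ~706 kbps
```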

Well, that's a questionable factor. The idea of surround sound is to make the music seem more lifelike, like a concert. It also helps separate the tracks, so one can hear individual instruments in individual channels. Plus, it IS rather difficult to discern the difference at that kind of bit rate. However, for some audio purists (and those with better hearing), quality is a deal breaker. Compression results in certain acoustics being "erased," which leads to a duller sound and a reduced range in both volume and frequencies. However, with people's deteriorating hearing, one has to wonder if quality is simply a secondary thought now.

Saturday, April 2, 2011

3DS: First big leap without glasses



As I've discussed before, 3D is a technology that has been bouncing back and forth in popularity since the 50s. However, it wasn't until fairly recently that the technology began to evolve. What started off as dual-colored glasses (red/cyan) evolved into polarized lenses (the image is adjusted to the glasses) and active shutter glasses (visibility of the image is alternated between the eyes at 60 frames per second per eye), both of which keep the original image intact. Now, with the 3DS, Nintendo's newest handheld, the viewer can see 3D images without glasses.

The Nintendo 3DS utilizes a technology still being refined called auto-stereoscopy, a way of presenting a 3D image that can be viewed passively, without glasses. Two images are generated and interleaved on the screen. The reason each eye is able to discern its own image is something called a "parallax barrier," a layer in front of the panel that blocks the opposing image's columns from each eye, allowing each eye to perceive a slightly different image.
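
As a rough illustration of the interleaving idea (a minimal sketch of a generic parallax-barrier layout, not Nintendo's actual panel geometry), the composite frame alternates pixel columns from the two eye views:

```python
# Generic parallax-barrier sketch: slice two eye views into alternating pixel
# columns; the barrier in front of the panel hides the "wrong" columns from
# each eye. The column assignment here is an assumed, simplified layout.
import numpy as np

def interleave_for_barrier(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Build the composite frame shown on the panel.

    left, right: (H, W, 3) images rendered from two slightly offset viewpoints.
    Even columns carry the left-eye image, odd columns the right-eye image.
    """
    assert left.shape == right.shape
    composite = np.empty_like(left)
    composite[:, 0::2] = left[:, 0::2]    # columns the barrier shows to the left eye
    composite[:, 1::2] = right[:, 1::2]   # columns the barrier shows to the right eye
    return composite
```

Because the barrier's slits only line up with the correct columns from one narrow head position, shifting sideways lets each eye see the other eye's columns, which is exactly why the viewing angle described below is so limited.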


This does two things compared to a traditional projected 3D image. It not only widens the image, showing more (think of it as 16:9 widescreen vs. a pan-and-scan 4:3 ratio), but it also enhances the overall definition and resolution, making the image sharper and clearer. There is, however, a drawback to this technology. Because of the way the screen and barrier are layered, the viewing angle is EXTREMELY limited. Unlike more traditional 3D, which lets you see the 3D image from almost any angle, one has to look at the screen straight on to get the full effect. Although one can tilt the screen vertically without affecting the image, moving it horizontally completely ruins the effect (making the images uneven or visibly separated). The second issue is that it's also more difficult to view the image over long periods compared to watching 3D with glasses. While one can watch an entire movie with glasses, one can begin to suffer eyestrain after around 30 minutes on average with auto-stereoscopic viewing.

While the viewing seems considerably more organic with auto-stereoscopy, the strain and narrower viewing angle make it a less enjoyable experience than using glasses, which are easier on the eyes in the long run. Granted, it is a technology still early in development. In terms of experimentation, it is fascinating. No doubt that as time progresses, the technology will evolve and improve, creating less strain and a wider viewing angle for viewers.