The 50th anniversary of the assassination of JFK has once again reignited the debate over what actually happened that day. Specifically, did Lee Harvey Oswald act alone, or was there a second gunman involved in the shooting? For those who seek to re-examine the events half a century on, the only evidence left to go on is the Warren Commission Report, a handful of (50-year-old) eyewitness memories and the Super 8 films taken on the fateful day. While the Zapruder film is the best known, a number of other films captured some or all of the events of that day from a variety of angles.
Despite the numerous conspiracy theories surrounding the shooting, little physical evidence remains to verify any of the alternative conclusions. The simple fact is that all we have left today are the surviving films and the story they replay frame by frame. A conspiracy may well have operated behind the scenes (a subject for another time), but these films provide no evidence that Oswald was assisted by a second gunman in the assassination.
But what if Zapruder had had a smartphone? Would we have seen the missing clues revealing a conspiracy? Would the events of that fateful day be exposed and the mystery solved, or would we be none the wiser? Let’s take a look:
Super 8 film resolution vs smartphone
The best Super 8 cameras with premium glass lenses and good-grade film have a resolution comparable with 720p after high-end digital processing. In contrast, a smartphone today will claim a higher resolution, usually 1080p, but the standard phone lens has a small aperture which reduces overall image quality. It may be a contentious point, but even with the Hollywood imaging techniques available today, it is doubtful the pictures would have been any better had Zapruder been holding a smartphone. It is also worth mentioning that a Super 8 camera required the operator to look down a viewfinder, while a smartphone has a large LCD screen. Some of the other films were poorly framed because of this, although it does not apply to Zapruder’s film.
Capturing audio of the shots
Super 8 cameras of the era typically didn’t record sound, so there is no definitive audio record of the sound and sequence of the three gunshots. This missing evidence has turned out to be one of the major points of conjecture over whether Oswald acted alone. Could he have fired all three shots in just six seconds (if you use the Zapruder film as the only reference), or was Zapruder’s camera off at the time of the first shot, making the actual time frame for the three gunshots closer to nine seconds? Complicating the investigation further, the timing of the gunshots can only be verified from the individual film frames that show the bullets which found their mark. Everyone agrees there were three shots and that the first shot missed its target, but no one can say when it was fired in relation to the other two. This is a key piece of evidence required to support any alternative theory.
Capturing vision of other gunmen
Standard Super 8 cartridges of the era held 50 feet of film, giving a duration of around 2.5 minutes at 24 frames per second or 3 minutes 20 seconds at 18 frames per second. Film was expensive and so was always conserved. Zapruder was attempting to maximise his capture by recording at 18 fps. Additionally, the Zapruder film was probably not rolling at the time of the first shot, as he was saving film while waiting for the motorcade to come closer. Likewise, many of the other cameras of the day were not rolling during the shooting for the same reason. This would not have been an issue with a smartphone, and in addition to capturing the audio, Zapruder may also have captured other aspects of what happened. There might be a better account of the grassy knoll, or vision that proves (or disproves) the involvement of other members of the motorcade. However, as with any recording, the only scenes captured depend on the operator pointing the camera in the right direction at the right time, and in a moment like this there is never any guarantee of that.
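The cartridge arithmetic is easy to check. A quick sketch, assuming the standard Super 8 frame pitch of 72 frames per foot of film:

```python
FRAMES_PER_FOOT = 72  # standard Super 8 frame pitch (assumption stated above)

def super8_runtime_seconds(feet: float, fps: float) -> float:
    """Runtime of a film cartridge: total frames divided by frame rate."""
    return feet * FRAMES_PER_FOOT / fps

# A standard 50 ft cartridge:
print(super8_runtime_seconds(50, 24))  # 150.0 s, i.e. 2.5 minutes
print(super8_runtime_seconds(50, 18))  # 200.0 s, i.e. 3 min 20 s
```

Shooting at 18 fps instead of 24 fps bought Zapruder roughly a third more recording time from the same cartridge, at the cost of smoothness.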
So overall, even if Zapruder had been using a smartphone, it might not have given us any more by way of visual clues. That said, it would undoubtedly have confirmed the timing and sequence of the shots, and the audio could possibly tell us whether a second weapon was also discharged. This is clearly the most important issue in confirming whether or not Oswald acted alone that day. But 50 years on, there is no way we will ever close this case for certain, and even if Zapruder had been using a smartphone there would still be questions of conspiracy. The only thing we do know is that if everyone present that day had held a smartphone, there would be no doubt at all about what happened, how it happened and who was responsible.
Watch FiST Chat 148: JFK Assassination 50th Anniversary for more on this topic.
Netflix is an amazing success story. Started in 1997 with just $2.5m, the business began as a mere DVD postal service, and within 10 years had introduced a video streaming service for movies. Now, in 2013, it has produced a slew of exclusive in-house series, an unprecedented move into bespoke content distributed exclusively through its own network.
Buoyed by the public demand for its new productions, Netflix is now taking this a step further by announcing the production of full-length features. This daring move positions it on the same turf as the Hollywood film studios and premium cable production houses like HBO. However, Netflix boasts one important difference: it owns its own private subscription network and doesn’t rely on deals with exhibitors or commercial advertisers to survive.
But will this crucial difference in business model allow it to reverse engineer Hollywood’s success, or will it prove a costly misstep in an otherwise great enterprise? This is a fascinating scenario in which the disruptive technology of the internet might allow a newcomer to claim some of the gold held tight by the classic Hollywood business model. It is nothing less than a gladiatorial battle, ultimately worth billions of dollars, but can Netflix win a contest worthy of a Hollywood blockbuster in itself?
Does Netflix have the creative capacity to make great films?
Judging by the popularity of Netflix’s 2013 stable (Orange Is The New Black, House of Cards etc), it certainly has the ability to make great films. However, creating a feature film is a completely different craft from producing serialized material. Early on, expect Netflix to produce some genuinely fresh and quirky stories from up-and-coming writers, featuring highly talented but lesser-known casts. A strategy like this could allow it to corner the mid-size production market, where Hollywood struggles to make a return. The big question is that a feature film is traditionally released in cinemas, and a theatrical release serves as a kind of quality-assurance label. Without the cinema experience, how will Netflix differentiate its features from its serials? This is an all-important question it will need to address.
Does Netflix have the ability to make blockbuster films?
Probably not in the traditional Hollywood sense. Blockbusters cost a lot of money even when they do well, and when they fail it’s nothing short of a career-ending disaster. The current Hollywood model, while close to breaking under the strain of new technology, still has distribution in its favour: several bites at revenue via cinema, online and DVD (soon to be 4K) releases. Although very expensive (and over-valued), the Hollywood production model also has the capacity to wow us with lucrative big brands, big names and extreme merchandising opportunities.
Does Netflix have the technology to compete with Hollywood?
The simple answer is no when it comes to production, yes when it comes to distribution. It is incredible to think that a business built on a mail-order list could successfully transfer its entire operations to the internet and make over its business in the process. By contrast, over the same timeframe Hollywood has been stumbling through its transition to the digital medium. For the time being Hollywood will continue to rule the cinema, but if you believe Spielberg, the longer-term and more important battle for movie audiences will be fought in the home.
Which business model is right?
To date, independent filmmakers using traditional business models for film production have failed to make money on the current online platforms. This is one area where Netflix truly has an advantage. It has its own subscription list (over 40 million subscribers) and can market directly to its audience without the traditional costs associated with distributors, exhibitors and sales agents. While this represents a substantial saving, the weakness in the model is that Netflix receives the same return on one viewer as it does on a group of six. How people choose to view the films may well set the upper budgets available to Netflix producers.
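That per-viewer weakness is easy to see with some toy numbers. The figures below are purely hypothetical, chosen only to illustrate the shape of the problem, not Netflix’s actual pricing:

```python
TICKET = 12.0        # hypothetical average cinema ticket price (USD)
MONTHLY_SUB = 8.0    # hypothetical flat monthly subscription (USD)
FILMS_PER_MONTH = 4  # hypothetical viewing rate per household

def cinema_take(group_size: int) -> float:
    # Box office revenue scales with every viewer who buys a ticket.
    return group_size * TICKET

def stream_take(group_size: int) -> float:
    # A subscription household pays the same flat fee no matter how many
    # people are on the couch; the fee is spread across the month's viewing.
    return MONTHLY_SUB / FILMS_PER_MONTH

print(cinema_take(1), cinema_take(6))  # 12.0 72.0
print(stream_take(1), stream_take(6))  # 2.0 2.0
```

A cinema earns six times as much from a group of six; the stream earns exactly the same, which is why audience viewing habits effectively cap the budgets Netflix can justify per film.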
Does the public want Netflix-produced films?
If Netflix produces fresh, high-quality movie content in the same way it is producing its internet series, then the answer is a resounding yes. More importantly, if successful, Netflix will rewrite the business model for film production and distribution. At that point Hollywood will have no choice but to sit up and take notice. But despite the promise of the prospective Netflix feature enterprise, it is not going to be an easy victory. Hollywood still holds the cards and dictates the industry through the big names in film, the ultimate cinema experience, and (soon) the power of take-home titles in Ultra High Definition. No doubt Netflix will make great films, fresh with creativity, but they won’t be blockbusters. And by rights, no matter how good your film, you can’t claim the crown in Tinseltown until you can beat Hollywood at its own game.
Watch FiST Chat 147: Discovering The Causes Of Bushfires for more on this topic.
This week India successfully launched its Mangalyaan space probe on its 10-month journey to Mars. It is quite an amazing milestone when you consider that the mission was only announced publicly 15 months ago, and that the only other successful launches of this kind have come from the big three: the US, Russian and European space agencies. Whether the probe makes it to its distant destination safely and completes its scientific survey is another question altogether. Over half of all the Mars probes ever launched have failed to deploy successfully for a variety of reasons, a fact which highlights the risks of long space journeys.
Either way, it is very exciting to see another nation make a serious commitment to space exploration. In recent years the GFC has severely curtailed the budgets of the big three agencies, but over the same period both India and China have expanded their programs and commenced their own mini space race. Since China’s failed Mars launch in 2011, its focus has shifted toward lunar exploration and the completion of a permanently manned space station. Perhaps the Indian success will be enough to provoke a response and spur China to resume its Mars program.
Both of these nations have set ambitious goals for their space agencies, and both have responded with amazing progress. While critics often point to the cost of space exploration, particularly in a country like India where poverty is rife, the tangible benefits are often overlooked. Large-scale science, technology and engineering projects like space exploration provide real opportunity for the future generations of these nations. The pursuit of new endeavours brings with it new knowledge, skills and industries, as well as generational aspirations and credibility on the global stage.
Proof that these programs have a positive effect on human ingenuity is readily demonstrated by the Indian space agency’s mission solutions for Mangalyaan. The probe was launched on a much smaller and comparatively underpowered rocket than its US or Russian counterparts would have used. That didn’t stop the team devising a number of unorthodox fixes, including having the probe spend a whole month in Earth orbit, raising its orbit with successive engine burns to build the speed required to sling it toward Mars.
By comparison, the prospect of a human mission to Mars by the big three agencies seems complicated to the point where it might just be simpler to abandon. In the risk-averse system the Western world is constructing for itself, mitigating every risk on an incredibly dangerous human mission to Mars is next to impossible. Not that careful planning and the desire to construct a return mission should be unreasonably criticised, but the current approach and budget cuts are making it almost unworkable.
For that reason the film The Mars Underground is worth a look. It follows rocket scientist Robert Zubrin and examines his plan for getting humans to Mars within the next ten years. A passionate space enthusiast and visionary scientist, Zubrin offers a take on a manned mission to the red planet that is both provocative and inspirational. However, his peers are often sceptical of his plans because they fall outside what is considered standard practice, even though he presents them as quite achievable. Watching this documentary made me wonder whether anyone from the Indian or Chinese space programs had seen it, and if they have, could one of these emerging nations soon shock us all by being the first to send humans to the red planet?
Another day in the life of the post-PC world. As I play around with OS X Mavericks I ponder the questions: “Did Steve Jobs leave us too soon, and would things be different if he were still here today?”, “If Bill Gates had stayed at Microsoft, would people have liked Windows 8 better?” and “Google Docs are plain evil; why can’t they fix them?”
It then hit me that what the world needs now is not love, but an ultimate user interface (U-UI): an operating system so truly post-PC that it just blows the others away for good, just as the tablet killed the desktop. Surely it can’t be that hard. The technology exists, but we are being held back by old PC business models and a lack of imagination. To fix this, I visualised myself as CEO of my own tech company and conceptualised all the features my new U-UI should incorporate.
1. Works out of the box / updatable
Just like the original Mac OS, let’s open the box, switch the device on and see it come to life with a chime. The device should have basic, customisable software and be cloud ready. All my old apps, music and files should immediately appear on my new machine and update the interface automatically, configuring it identically to my old device. As time goes on you can update the U-UI if you want, but you’re not obliged to. Just as some people still run XP on their PCs today, if you are happy with the speed and operation of your U-UI, you shouldn’t have to upgrade just to stay compatible.
2. Simple – universal – cross-platform – open
Windows 8 (in Metro) is possibly the only proper cross-platform UI on the market. It might not be everyone’s favourite interface on a PC, but it works as well on a Nokia phone as it does on a Surface tablet (irony???). Google and Apple seem to be creeping in this direction without being able to take the leap of faith. Part of this design should be to keep the U-UI flexible and open, and to move away from lockdown. I don’t want to buy another $80 add-on from Apple. I want to source my own hardware and peripherals, connect them to my devices and configure them how I want, just like in the good old days.
3. Cross system functionality
Yes, cross-system functionality. If you can dual boot a Mac, why can’t you boot any PC, tablet or phone into your preferred interface? The U-UI allows you to do this while running in the background. I don’t care if the engineering team or the sales department tell me it can’t be done, because I know it can. In the immortal words of Mr Jobs: “I don’t want excuses - just do it, Sh#th3@ds!”
4. Gesture and voice control
Almost all desktops, laptops, tablets and mobiles have cameras and voice-activated assistants, so perhaps it’s time we started using them for things besides ‘selfies’ and Google searches. It’s time that simple voice commands and gestures (including eye movements) were meaningfully incorporated into the user experience, so creative types can get the most out of their workflows as they move between applications. This only applies to a small number of users, but those who work in the creative field will know exactly what I mean. And yes, the mouse and the keyboard remain essential components of this creative setup.
5. Re-design your own U-UI and include a DIY App builder
Visual programming has been around for ages, so how about we incorporate it directly into the U-UI, allowing users to reconfigure the interface any way they want? This is the ultimate in customisation and one of the reasons the kids today love Android. But why not take it a step further, so you can customise the UI with purchased apps or, wait for it, ones that you have built yourself? Yes, give users control of their system and hardware, just like they had in 1976 in the pre-PC world.
Of course there will be plenty who say this is just a dream, but it shouldn’t be. In a truly Post-PC world all of these features should be on offer sooner rather than later. With everything rapidly shifting toward the cloud, soon all we will be left with is the device we hold in our hands and that interface should be the U-UI.
Watch FiST Chat 145: Lifecycles of Post-PC Devices for more on this topic.
The current feature release Gravity is receiving great reviews, not least for its stunning visual production. The success of the film is due in part to the use of CGI for effect (not gimmick) and, believe it or not, the adaptation of some good old-fashioned studio rigging to create visual effects in camera. One invention ensured the lighting correctly simulated sunlight in low Earth orbit, while another allowed the actors (and camera) to correctly simulate astronauts experiencing zero G.
It’s good to see old-fashioned technology being adapted to great effect by Hollywood, and CGI being sensibly integrated into a film. Tinseltown has been responsible for championing some of the greatest advances in cinema technology, and with every step forward better films and stories become possible. Whether the technology used in Gravity will lead to an explosion in super-realistic space films remains to be seen, but here are five technologies we take for granted today that revolutionized film making over the years.
1. The Motion Camera and the Silent Film Era
After the invention of the movie camera in the 1880s and until the 1920s, film was silent, although cinema pianists often played live during screenings. Thanks to some very inventive theatrical techniques, and without the constraint of having to capture clean audio for spoken parts, silent film was a mix of action, physical theatre and mime. It gave rise to legends like Buster Keaton and Charlie Chaplin, who mastered the early craft, as well as truly great films that still stand the test of time, like Metropolis and Nosferatu.
2. Synchronous Sound
During the 1920s the first films featuring audio tracks were exhibited. This technology allowed a new genre of film, the talkie, and with it ushered in scripts with dialogue-driven plots. Today it’s hard to appreciate what a giant step this was for the film industry; in many ways it represented a rebirth of the craft. In fact, synchronous sound was such a revolutionary advance at the time that some of the established industry recognised the threat it posed to the status quo and denounced it. Needless to say, the classic silent genre couldn’t compete with the new talkie, even though the two distinct styles co-existed until the mid-1930s.
3. Animation
It’s not often acknowledged that animated films have been around almost as long as film itself. The first star of the format was Felix the Cat in 1919, followed soon after by the emergence of the Disney productions. Animation worked well with synchronous sound, and by the early 1930s the industry was in full swing, with Disney and Looney Tunes producing some of history’s most enduring animated characters. Animated films have continued to develop in step with technology to produce an incredible list of all-time favourites, including Toy Story, Shrek and Finding Nemo. Animated films generally offer a quality of cinema experience that captures the essence of film: the suspension of disbelief. On this basis alone, 3D technology is relegated to nothing more than an overrated, studio-supported fad.
4. SFX and CGI
Before CGI there was SFX, which involved a camera and some real-time, real-life illusionary effects. While special effects have been around as long as film itself, the zenith of SFX was in the 1970s and 1980s with the production of the original Star Wars trilogy by George Lucas and his team at Industrial Light and Magic. Using scale models, special lighting and celluloid tricks, they took the film experience to new heights. At the same time, computing technology was creating the first CGI, but these were mostly cameos until the second Star Wars trilogy. In 2001, Final Fantasy became the first feature film to attempt completely photorealistic, computer-generated human characters, and then all of these technologies were combined to create the photorealistic 3D world of James Cameron’s 2009 classic, Avatar.
5. Digital Film Making
Digital film-making technology has completely revolutionized the film industry. It has enabled both the modern animated genres and the realistic imagery of CGI-generated environments. George Lucas was the first major film-maker to embrace digital as a medium, in 1999, saving almost $5m in film stock while producing The Phantom Menace. From that point on, many successful independent and guerilla film-makers have embraced the format as a cheap, easy and accessible way to produce feature-length films.
Since then the digital medium has had a further impact on the industry, changing the way films are distributed and exhibited. In this regard it is both a major advance for Hollywood and a disruptive technology that will forever change the craft of filmmaking. In many ways, digital has matured film to a point where it will depart from what we know, much as synchronous sound changed the silent era. It might be difficult to predict where the technology will take us, or what new experiences future films will make possible, but we already know it will change the way we make and view films forever.
Watch FiST Chat 144: Groundbreaking Sci-Fi With Gravity for more on this topic.
Bill Gates’ admission that the (now) much-loved Windows Control-Alt-Delete logon sequence was a mistake took some time for me to process. Sure, with the benefit of hindsight a single key would have been preferable, but the outcome could have been far worse. The intransigent IBM engineer responsible could have insisted on a manual logon using the command prompt. Back then that would have been perfectly reasonable. Think about it: it is only in recent times that you’ve been able to drop http://www. from your web addresses.
The Control-Alt-Delete situation was very common back in the early PC days. Early hardware was primitive, and it was the techies’ job to create software that allowed the hardware to perform all those amazing tasks. It was a tough ask. There were no designers, only engineers, and by and large those engineers designed technology for other engineers working with computers.
This situation became so entrenched that by the time the public started to embrace PCs, computers were made and designed by engineers without any regard for the end-user experience. It scared, frustrated and put off many people in the early days of personal computing. It was also responsible for making Steve Jobs look like a genius for delivering users the Mac OS.
To his credit, Jobs really did care about the user experience and seemed to be the only man in Silicon Valley who recognized ease of use as being vitally important to the bottom line. Apple’s remarkable success today is a result of Jobs’ visionary perspective 30 years ago and it highlights the not so secret formula for creating great software: build it for the end user. Make it simple, make it relevant and make it work.
Historically the greatest software failures have occurred when a company produces software that suits their needs and not that of the customer. It’s a recipe for disaster, which is repeated over and over. Here are just a few of the big mistakes and how things went so wrong.
Windows ME & Vista
These two operating systems from Microsoft are tied as arguably the most disappointing ever. Technically they might not be the worst examples, but no software company has ever promised so much (twice) and delivered so little to the long-suffering PC user. ME was an overhyped update of Windows 98, sold against the backdrop of the Y2K fear factor. Thanks to a ridiculously self-congratulatory marketing campaign, Vista was much vaunted and anticipated, but on release proved even more awful than anyone could have imagined. Both systems were dreamed up by marketing teams to boost sales, yet delivered none of the simplicity or stability users really wanted, damaging Microsoft’s image in the process.
Norton Antivirus
It wasn’t that Norton’s antivirus was bad; as far as antivirus software went, it was quite effective. It might be slightly unfair to include it here, but it underlines a situation where the cure was worse than the disease. Designed to seek out viruses no matter where they were hidden on your computer, with it running in the background it slowed your machine to a crawl. If only those who wrote the code had actually lived in the real world of the home PC user.
Sony BMG copy protection
In the 2000s, Sony BMG came up with a software solution to prevent music piracy. While you can’t blame them for trying, they were so intent on stamping out copyright theft that they completely ignored the consequences of their actions and shipped their CDs with digital rights management software. The software installed itself when the disc was inserted into the drive, without the user’s permission, and was effectively deemed spyware. Lawsuits followed, and an embarrassed Sony BMG discontinued the copy protection program.
Final Cut Pro X
For a long time, creative types felt indebted to Apple for creating a computer, an OS and then Final Cut Pro as a simple, affordable, reliable platform for video editing. In all its incarnations, Final Cut offered a great user experience. Then, in a single software release, Apple undid all this good work by delivering Final Cut Pro X. Incompatible with the old system and counter-intuitive in just about every way, it was hated universally in a single moment, undoing over a decade of goodwill amongst video editors the world over. Plenty of conspiracy theories abound as to why Apple did what it did, but whatever the reason, it was clear they meant to kill off the old FCP for good.
Apple Maps
The Apple Maps fiasco of 2012 holds a special place in this list. The advent of apps has in many ways been a turning point for software: apps are designed for the end user, and their interfaces are typically user friendly as well as functional. However, in its haste to get the Maps app to market, Apple released an embarrassing hash of a clearly unfinished product that was inaccurate and ultimately unusable. It would never have happened during the Jobs era, and it just goes to show that even an industry leader like Apple can screw up by taking shortcuts.
Watch FiST Chat 142: Control-Alt-Delete Was A Mistake for more on this topic.