So you’re thinking of developing a mobile app, huh? Where to start? Android? iOS? Windows? BlackBerry? I think it’s safe to assume that most developers will forgo the latter two and focus on Android and iOS, especially if they are interested in generating revenue. So how do you decide between starting with Android or iOS? Really, the decision is easy, for a plethora of reasons.
Let’s start with money. The best place to make a profit is iOS. Apple is usually viewed by start-ups as the ideal platform to design for because of its more affluent, higher-spending customer base. Even though Android has a commanding lead in market share (84.7% compared to 11.7% for iOS) and generates downloads that are 60% higher than iOS, revenue is still 60% higher on the iTunes App Store! Maybe you’re scratching your head, asking “how can this be?” Allow me to explain. There are three main reasons for this “fuzzy math.” First, iOS devices are generally more expensive and appeal to higher-income consumers, the people most likely to spend money on apps and in-app purchases. Second, iOS has little of the market share in developing countries that Android does, and Android has been slow to adopt carrier billing there, which means many of those users can’t purchase apps even if they want to; on top of that, Apple requires users to enter credit card information when setting up an account, while for Android users it is merely an option. Lastly, iOS is a “closed” system, making it much harder to pirate apps; piracy is, unfortunately, pretty common with Android apps, and it cuts directly into revenue. Take a look at this chart for easy reference:
So, money is important, but it’s not everything, right? There are other reasons why one might choose iOS over Android. For example, Apple iOS developers spend most of their time coding. Android developers? According to an Evans Data report, they spend the bulk of their time testing and debugging their code. Why? Android fragmentation, which forces developers to spend far more time testing disparate hardware, a problem no other mobile platform has to the same degree. With more than 1,600 distinct Android devices to account for, it’s not surprising that Android developers spend an inordinate amount of time testing and debugging. And even though Apple holds apps to higher design standards, iOS is in fact considerably easier to develop for; Android’s primary development tool, meanwhile, is an unwieldy piece of software named Eclipse.
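To make the fragmentation point concrete, here is a small, purely illustrative Java sketch of the per-version and per-manufacturer branching that Android code tends to accumulate. The class, the vendor check and the “workaround” are hypothetical; only the Build.VERSION and Build.MANUFACTURER fields are standard Android SDK APIs.

```java
// Hypothetical illustration of the branching that multiplies Android test effort.
// Build.VERSION.SDK_INT, Build.VERSION_CODES and Build.MANUFACTURER are real
// Android SDK fields; the specific checks below are invented for this example.
import android.os.Build;

public final class DeviceCompat {

    // Features introduced in Android 5.0 (API 21) need a fallback path on older devices.
    public static boolean supportsMaterialElevation() {
        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP;
    }

    // Some issues only show up on particular manufacturers' builds, which is why the
    // device matrix, not just the OS version, has to be covered during testing.
    public static boolean needsVendorWorkaround() {
        return "somevendor".equalsIgnoreCase(Build.MANUFACTURER)
                && Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP;
    }
}
```

Every one of those branches is another combination that has to be verified on real hardware, which is exactly where the Evans Data finding about testing time comes from.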
This might lead you to ask, “Why bother developing on Android at all?” It’s fragmented, harder to develop for, requires a lot of debugging and doesn’t make all that much money. One reason is distribution: Android apps can be downloaded from a number of different stores, such as Google Play, the Amazon Appstore, or any number of independent app stores. Having multiple places to download Android apps can be great for consumers because they have a choice. The downside? It creates inconsistent rankings and reviews from store to store, and rankings and reviews are the first thing customers look at when deciding whether to download an app.
That said, with Android’s massive user base and its rapid adoption in developing countries, Android can be a great place to start, especially for a beginner or amateur: it’s much easier to get your apps published on Android than on iOS because Google Play’s review process is much less stringent than Apple’s.
So what’s the bottom line? If you’re just getting started and/or not looking to make a profit from your awesome apps, I would recommend trying both and drawing your own conclusions; in fact, please post your comments here! For those who are more experienced and looking to turn your tech-savvy skills into a monthly paycheck, it seems iOS is the way to go.
Back in September 2014, I published a blog titled “Hardware, Software, Anywhere, Everywhere?”, which discussed the ever-increasing ability of consumer technology to track our every move. Likening our modern-day “trackableness” to the dystopian worlds of the novels “1984” and “Brave New World”, I posed the question: when does it become too much?
Now it seems that my inbox is flooded almost daily with white papers, case studies, and webinar invitations addressing the non-stop security threats that we all face, individuals and companies alike. So how do we address them? According to the experts: with even more “tracking” than already exists.
For Joe Consumer, the threats involve stolen Social Security numbers, passwords and credit cards (thank goodness one of my credit card companies is now using the EMV “chip” technology; the U.S. is finally catching up to our friends ‘across the pond’!). Approximately 120 million Americans have already received an EMV chip card, and the number of chip cards in circulation is projected to reach nearly 600 million by the end of 2015, according to Smart Card Alliance estimates.
But for corporations (large or small), the security threats are quite complex. It’s not just outside hackers anymore; it’s the “insider threat”: the new guy you just hired in HR, the disgruntled contractor, the veteran employee in accounting who consistently comes into the building on weekends, or even the HVAC technician coming to take a look at your air conditioning! Why the HVAC guy? Visual hacking: all he needs is a camera phone to snap a picture of a computer screen showing login credentials, and he has everything he needs to get in. According to an experiment reported by CSO Online, low-tech visual hacking succeeded nine times out of ten: the researchers spent up to two hours in each of 43 different offices, wandering around, taking pictures of computer screens, and slipping documents marked “confidential” into their bags, all deliberately in full view of the regular employees.
In a paper published by Quantum Secure, a maker of physical identity and access management software, it is asserted that the way companies hire employees has changed drastically, and with it the enterprise’s security posture: companies are more likely to have contractors, part-time workers or even virtual employees, and as a result the chances are greater that unfamiliar people may be onsite, or offsite and still accessing data. Quantum Secure suggests that HR data should be monitored closely for “red flags” and “high risk” individuals. For instance, someone who got a bad performance review may be a “high risk” employee. Someone who recently got divorced or had a baby may be “flagged” because this life status change could indicate a change in their financial status. If either of these two hypothetical individuals were also accessing the physical building after hours, using the printer too much, or had access to sensitive data, they might be placed on the “watch list!” Have we gotten to a point in society and in the corporate world where every single, solitary movement is tracked and analyzed? Why does this remind me of the “terror watch list”? Here is an excerpt from the paper that goes into more detail:
Understanding whom to track among permanent and temporary employees, and how to track them, is only the beginning. Enterprises need to adhere to several best practices to become highly efficient at identifying potential insider threats before they become a reality. What’s the best approach?
First, as noted, enterprises need a system that connects each of the appropriate data sources: human resources databases, physical security systems and IT logs. The system must be highly flexible, easily integrated and unfailingly accurate.
Next, they need to set up a monitoring system to look at information aggregated from those sources: issues that change infrequently, such as job titles; triggering events, such as performance reviews; and behavioral changes. By establishing a baseline of behavior, enterprises can create profiles and risk scores associated with all levels of employees: permanent, part-time, contractor, virtual.
The next step is to establish a high-risk score and identify employees that fall into that range. Establishing this score gives enterprises the ability to focus on only the highest-risk roles; after all, enterprises can’t track everyone who uses the printers or the photocopiers. Anyone in those roles—or applying for those roles—should be subject to initial as well as periodic background checks. Identifying high-risk employees enables enterprises to focus their efforts on only the most likely roles and helps them take proactive steps to review access, segregate roles to eliminate conflict or schedule more-frequent (or more-detailed) audits and reviews.
The overarching point here is to embark on these efforts armed with context. Relying solely on data points such as printer usage or evening access represents an incomplete approach to insider threats. This data is useless without the context of other events, behaviors or attributes. Enterprises that rely on simple patterns run the risk of diminishing morale and creating an unnecessary culture of suspicion.
But bringing these data points together enables enterprises to create the equivalent of a “watch list” that can be used for permanent employees as well as other categories. For instance, the system may track contractors’ employees for safety and security violations. If they decrease, no harm done. If they increase, the enterprise can engage in remediation such as retraining, limiting access, requiring an escort or reporting the violations to a manager.
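To make the paper’s “risk score” idea concrete, here is a minimal, hypothetical Java sketch of how signals like after-hours badge entries, printer usage and an HR triggering event might be rolled up into a single score with a watch-list threshold. To be clear, this is my own illustration, not Quantum Secure’s actual model: the signals, weights and threshold are all invented.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of combining badge, printer and HR signals into one risk score.
// The fields, weights and threshold are invented; this is not Quantum Secure's model.
public class InsiderRiskSketch {

    static class EmployeeActivity {
        final String name;
        final int afterHoursEntries;   // badge swipes outside business hours this month
        final int printerJobs;         // print jobs this month
        final boolean negativeReview;  // HR "triggering event"
        final boolean sensitiveAccess; // holds access to sensitive data or areas

        EmployeeActivity(String name, int afterHoursEntries, int printerJobs,
                         boolean negativeReview, boolean sensitiveAccess) {
            this.name = name;
            this.afterHoursEntries = afterHoursEntries;
            this.printerJobs = printerJobs;
            this.negativeReview = negativeReview;
            this.sensitiveAccess = sensitiveAccess;
        }
    }

    // No single data point is enough on its own; only a combination of signals pushes
    // someone over the threshold, which is the "context" the paper keeps stressing.
    static int riskScore(EmployeeActivity e) {
        int score = 0;
        if (e.afterHoursEntries > 4) score += 2;
        if (e.printerJobs > 200)     score += 1;
        if (e.negativeReview)        score += 2;
        if (e.sensitiveAccess)       score += 1;
        return score;
    }

    static List<String> watchList(List<EmployeeActivity> staff, int threshold) {
        List<String> flagged = new ArrayList<>();
        for (EmployeeActivity e : staff) {
            if (riskScore(e) >= threshold) {
                flagged.add(e.name);
            }
        }
        return flagged;
    }

    public static void main(String[] args) {
        List<EmployeeActivity> staff = List.of(
                new EmployeeActivity("Contractor A", 6, 250, true, false), // several combined signals
                new EmployeeActivity("Employee B", 1, 30, false, true));   // one signal in isolation
        System.out.println(watchList(staff, 5)); // prints [Contractor A]
    }
}
```

Even in this toy version you can see what makes me uneasy: a bad performance review plus a few late nights at the office is all it takes to land someone on the list.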
Now, I never expect to be one of these “watch list” folks, but it does make me uncomfortable to think that such a list exists, and that my ‘life status change’ could potentially trigger something in the “system” that automatically makes me a suspect!
I understand the need for physical and cyber security. I really do. But I also insist on my privacy. I hope that as more and more companies understand and proactively implement additional security measures, they do so with full disclosure to their current and future employees…Forewarning is fair warning.