Category Archives: Research Blog

An Internet of Drones

by Robert J. Hall

AT&T Labs Research

 

The safe operation of drones for commercial and public use presents communication and computational challenges. This article overviews these challenges and describes a prototype system (the Geocast Air Operations Framework, or GAOF) that addresses them using novel network and software architectures.

 

The full article can be accessed by subscribers at https://www.computer.org/cms/Computer.org/ComputingNow/issues/2016/07/mic2016030068.pdf

Drones, flying devices lacking a human pilot on-board, have attracted major public attention. Retailers would love to be able to deliver goods using drones to save the costs of trucks and drivers; people want to video themselves doing all sorts of athletic and adventuresome activities; and news agencies would like to send drones to capture video of traffic and other news situations, saving the costs of helicopters and pilots.

Today, both technological and legal factors restrict what can be achieved and what can be allowed safely. For example, the US Federal Aviation Administration (FAA) requires drones to operate within line-of-sight (LOS) of a pilot who’s in control, and also requires drones to be registered.

In this article, I will briefly overview some of the opportunities available to improve public and commercial drone operation. I will also discuss a solution approach embodied in a research prototype, the Geocast Air Operations Framework (GAOF), that I am working on at AT&T Labs Research. This prototype system has been implemented and tested using simulated drones; aerial field testing with real drones is being planned and will be conducted in accordance with FAA guidelines. The underlying communications platform, the AT&T Labs Geocast System,1-3 has been extensively field tested in other (non-drone) domains with Earth-bound assets, such as people and cars. The goal of the work is to demonstrate a path toward an improved system for the operation of drones, with the necessary secure command and control among all legitimate stakeholders, including drone operators, the FAA, law enforcement, and private property owners and citizens. While today there are drones and drone capabilities that work well with one drone operating in an area using a good communication link, the challenges will grow when there are tens or hundreds of drones in an area.

Note that some classes of drone use are beyond the scope of this discussion:

• Military drones. The US military has been operating drones for many years and is the acknowledged world expert in the field. However, its usage scenarios are quite different, and many of its technical approaches are out of scope for this discussion, because they rely on resources and authority that are unavailable (such as military frequency bands) or impractical (high-cost drone designs and components) in the public/commercial setting. Instead, we seek solutions whose costs are within reason for public and commercial users and which do not require access to resources unavailable to the public.

• Non-compliant drones. It will always be possible for someone to build and fly drones that do not obey the protocols of our system. We will not discuss defenses against such drones, for example electromagnetic pulse (EMP) weapons, jamming, or trained birds of prey.4 However, we hope to work toward a framework for safe and secure large-scale drone use, analogous to establishing traffic laws for cars.

• Drone application-layer issues. Obviously, drones should actually do something useful once we have gone to the trouble to operate them safely. Often, this takes the form of capturing video or gathering other sensor data. This article does not address the issues involved in transferring large data sets from drone to ground or drone to cloud.

The rest of this article will give background on the communications system underlying the GAOF, the challenges of safe and scalable air operations, and how the GAOF addresses these challenges.

Chaos Engineering

by:

Ali Basiri, Niosha Behnam, Ruud de Rooij, Lorin Hochstein, Luke Kosewski, Justin Reynolds, and Casey Rosenthal,

Netflix

This is an excerpt of the article published in the July 2016 edition of Computing Now at  https://www.computer.org/cms/Computer.org/ComputingNow/issues/2016/07/mso2016030035.pdf

 

Modern software-based services are implemented as distributed systems with complex behavior and failure modes. Chaos engineering uses experimentation to ensure system availability. Netflix engineers have developed principles of chaos engineering that describe how to design and run experiments.

 

THIRTY YEARS AGO, Jim Gray noted that “A way to improve availability is to install proven hardware and software, and then leave it alone.”1 For companies that provide services over the Internet, “leaving it alone” isn’t an option. Such service providers must continually make changes to increase the service’s value, such as adding features and improving performance. At Netflix, engineers push new code into production and modify runtime configuration parameters hundreds of times a day. (For a look at Netflix and its system architecture, see the sidebar.) Availability is still important; a customer who can’t watch a video because of a service outage might not be a customer for long.

But to achieve high availability, we need to apply a different approach than what Gray advocated. For years, Netflix has been running Chaos Monkey, an internal service that randomly selects virtual-machine instances that host our production services and terminates them.2 Chaos Monkey aims to encourage Netflix engineers to design software services that can withstand failures of individual instances. It’s active only during normal working hours so that engineers can respond quickly if a service fails owing to an instance termination.
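To make the idea concrete, here is a minimal Python sketch (not Netflix’s actual implementation) of the behavior described above: during normal working hours, pick one random instance per service and terminate it. The `list_instances` and `terminate` helpers are hypothetical stand-ins for whatever infrastructure API is actually in use.

```python
# Illustrative sketch of the Chaos Monkey idea described above: terminate one
# randomly chosen instance per service, but only during normal working hours.
import random
from datetime import datetime

def list_instances(service: str) -> list[str]:
    # Hypothetical stand-in: a real implementation would query the cloud provider's API.
    return [f"{service}-instance-{i}" for i in range(3)]

def terminate(instance_id: str) -> None:
    # Hypothetical stand-in: a real implementation would call the cloud provider's API.
    print(f"terminating {instance_id}")

def within_working_hours(now: datetime) -> bool:
    # Restrict terminations to weekday working hours so engineers can respond quickly.
    return now.weekday() < 5 and 9 <= now.hour < 17

def chaos_round(services: list[str]) -> None:
    if not within_working_hours(datetime.now()):
        return
    for service in services:
        instances = list_instances(service)
        if instances:
            terminate(random.choice(instances))

chaos_round(["api-gateway", "recommendations", "playback"])
```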

Chaos Monkey has proven successful; today all Netflix engineers design their services to handle instance failures as a matter of course.

That success encouraged us to extend the approach of injecting failures into the production system to improve reliability. For example, we perform Chaos Kong exercises that simulate the failure of an entire Amazon EC2 (Elastic Compute Cloud) region. We also run Failure Injection Testing (FIT) exercises in which we cause requests between Netflix services to fail and verify that the system degrades gracefully.3 Over time, we realized that these activities share underlying themes that are subtler than simply “break things in production.” We also noticed that organizations such as Amazon,4 Google,4 Microsoft,5 and Facebook6 were applying similar techniques to test their systems’ resilience. We believe that these activities form part of a discipline that’s emerging in our industry; we call this discipline chaos engineering. Specifically, chaos engineering involves experimenting on a distributed system to build confidence in its capability to withstand turbulent conditions in production. These conditions could be anything from a hardware failure, to an unexpected surge in client requests, to a malformed value in a runtime configuration parameter. Our experience has led us to determine principles of chaos engineering (for an overview, see http://principlesofchaos.org), which we elaborate on here.

Fatal Tesla Self-Driving Car Crash Reminds Us That Robots Aren’t Perfect


This article is obtained from http://spectrum.ieee.org/cars-that-think/transportation/self-driving/fatal-tesla-autopilot-crash-reminds-us-that-robots-arent-perfect?utm_campaign=TechAlert_07-21-16&utm_medium=Email&utm_source=TechAlert&bt_alias=eyJ1c2VySWQiOiAiOWUzYzU2NzUtMDNhYS00YzBjLWIxMTItMWUxMjlkMjZhNjE0In0%3D

Photo: Bloomberg/Getty Images

On 7 May, a Tesla Model S was involved in a fatal accident in Florida. At the time of the accident, the vehicle was driving itself, using its Autopilot system. The system didn’t stop for a tractor-trailer attempting to turn across a divided highway, and the Tesla collided with the trailer. In a statement, Tesla Motors said this is the “first known fatality in just over 130 million miles [210 million km] where Autopilot was activated” and suggested that this ratio makes the Autopilot safer than an average vehicle. Early this year, Tesla CEO Elon Musk told reporters that the Autopilot system in the Model S was “probably better than a person right now.”

The U.S. National Highway Traffic Safety Administration (NHTSA) has opened a preliminary evaluation into the performance of Autopilot, to determine whether the system worked as it was expected to. For now, we’ll take a closer look at what happened in Florida, how the accident might have been prevented, and what this could mean for self-driving cars.

According to an official report of the accident, the crash occurred on a divided highway with a median strip. A tractor-trailer truck in the westbound lane made a left turn onto a side road, making a perpendicular crossing in front of oncoming traffic in the eastbound lane. The driver of the truck didn’t see the Tesla, nor did the self-driving Tesla and its human occupant notice the trailer.  The Tesla collided with the truck without the human or the Autopilot system ever applying the brakes. The Tesla passed under the center of the trailer at windshield height and came to rest at the side of the road after hitting a fence and a pole.

Image: Florida Highway Patrol

Tesla’s statement and a tweet from Elon Musk provide some insight as to why the Autopilot system failed to stop for the trailer. The autopilot relies on cameras and radar to detect and avoid obstacles, and the cameras weren’t able to effectively differentiate “the white side of the tractor trailer against a brightly lit sky.” The radar should not have had any problems detecting the trailer, but according to Musk, “radar tunes out what looks like an overhead road sign to avoid false braking events.”

We don’t know all the details of how the Model S’s radar works, but the fact that the radar could likely see underneath the trailer (between its front and rear wheels), coupled with the trailer’s position perpendicular to the road (and mostly stationary), could easily lead to a situation where a computer could reasonably assume it was looking at an overhead road sign. And most of the time, the computer would be correct.

Tesla’s statement also emphasized that, despite being called “Autopilot,” the system is assistive only and is not intended to assume complete control over the vehicle:

It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.

I don’t believe that it’s Tesla’s intention to blame the driver in this situation, but the issue (and this has been an issue from the beginning) is that it’s not entirely clear whether drivers are supposed to feel like they can rely on the Autopilot or not. I would guess Tesla’s position on this would be that most of the time, yes, you can rely on it, but because Tesla has no idea when you won’t be able to rely on it, you can’t really rely on it. In other words, the Autopilot works very well under ideal conditions. You shouldn’t use it when conditions are not ideal, but the problem with driving is that conditions can very occasionally turn from ideal to not ideal almost instantly, and the Autopilot can’t predict when this will happen. Again, this is a fundamental issue with any car that has an “assistive” autopilot that asks for a human to remain in the loop, and is why companies like Google have made it their explicit goal to remove human drivers from the loop entirely.

The fact that this kind of accident has happened once means that there is a reasonable chance that it, or something very much like it, could happen again. Tesla will need to address this, of course, although this particular situation also suggests ways in which vehicle safety in general could be enhanced.

Here are a few ways in which this accident scenario could be addressed, both by Tesla itself, and by lawmakers more generally:

A Tesla Software Fix: It’s possible that Tesla’s Autopilot software could be changed to more reliably differentiate between trailers and overhead road signs, if it turns out that that was the issue. There may be a bug in the software, or it could be calibrated too heavily in favor of minimizing false braking events.

A Tesla Hardware Fix: There are some common lighting conditions in which cameras do very poorly (wet roads, reflective surfaces, or low sun angles), and the resolution of radar is relatively low. Almost every other self-driving car with a goal of sophisticated autonomy uses LIDAR to fill this kind of sensor gap, since LIDAR provides high resolution data out to a distance of several hundred meters with much higher resiliency to ambient lighting effects. Elon Musk doesn’t believe that LIDAR is necessary for autonomous cars, however:

For full autonomy you’d really want to have a more comprehensive sensor suite and computer systems that are fail proof.

That said, I don’t think you need LIDAR. I think you can do this all with passive optical and then with maybe one forward RADAR… if you are driving fast into rain or snow or dust. I think that completely solves it without the use of LIDAR. I’m not a big fan of LIDAR, I don’t think it makes sense in this context.

Musk may be right, but again, almost every other self-driving car uses LIDAR. Virtually every other company trying to make autonomy work has agreed that the kind of data that LIDAR can provide is necessary and unique, and it does seem like it might have prevented this particular accident, and could prevent accidents like it.

Vehicle-to-Vehicle Communication: The NHTSA is currently studying vehicle-to-vehicle (V2V) communication technology, which would allow vehicles “to communicate important safety and mobility information to one another that can help save lives, prevent injuries, ease traffic congestion, and improve the environment.” If (or hopefully when) vehicles are able to tell all other vehicles around them exactly where they are and where they’re going, accidents like these will become much less frequent.

Side Guards on Trailers: The U.S. has relatively weak safety regulations regarding trailer impact safety systems. Trailers are required to have rear underride guards, but compared with other countries (like Canada), the strength requirements are low. The U.S. does not require side underride guards. Europe does, but they’re designed to protect pedestrians and bicyclists, not passenger vehicles. An IIHS analysis of fatal crashes involving passenger cars and trucks found that “88 percent involving the side of the large truck… produced underride,” where the vehicle passes under the truck. This bypasses almost all front-impact safety systems on the passenger vehicle, and as Tesla points out, “had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.”


If Tesla comes up with a software fix, which seems like the most likely scenario, all other Tesla Autopilot systems will immediately benefit from improved safety. This is one of the major advantages of autonomous cars in general: accidents are inevitable, but unlike with humans, each kind of accident only has to happen once. Once a software fix has been deployed, no Tesla autopilot will make this same mistake ever again. Similar mistakes are possible, but as Tesla says, “as more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing.”

The near infinite variability of driving on real-world roads full of unpredictable humans means that it’s unrealistic to think that the probability of injury while driving, even if your car is fully autonomous, will ever reach zero. But the point is that autonomous cars, and cars with assistive autonomy, are already much safer than cars driven by humans without the aid of technology. This is Tesla’s first Autopilot-related fatality in 130 million miles [210 million km]: humans in the U.S. experience a driving fatality on average every 90 million miles [145 million km], and in the rest of the world, it’s every 60 million miles [100 million km]. It’s already far safer to have these systems working for us, and they’re only going to get better at what they do.

Henna delays fingerprint-secured graduations in India


See more at: http://www.planetbiometrics.com/article-details/i/4716/desc/henna-delays-fingerprint-secured-graduations-in-india/#sthash.3GbIo1wv.dpuf

 

Students hoping to graduate from an educational course in India have had to wait to confirm final results after henna applied to their fingers disrupted a biometric authentication tool. The biometric device rejected the thumb impressions of “scores” of female candidates who had applied “Mehendi,” traditional henna makeup, during the Eid festival in the city of Hyderabad, reported Sisat. With only a couple of days left to complete the verification process, students’ anxiety escalated, and girls were seen trying to erase the mehendi with detergent and even hydrogen peroxide. An official said that clear instructions on the matter were provided at the time of application, but the girls didn’t take them seriously. He said that light-coloured henna could still be matched by the device, but dark colours were not recognised. Students were asked not to panic as they have until later this week to provide verification details.

Cloud

This article is taken from:

Encyclopedia of Cloud Computing ©Wiley 2016

By San Murugesan and Irena Bojanova

 

Clouds are powerful change‐agents and enablers. Several converging and complementary factors are driving the rise of cloud computing. The increasing maturity of cloud technologies and cloud service offerings coupled with users’ greater awareness of the cloud’s benefits (and limitations) is accelerating the cloud’s adoption. Better Internet connectivity, intense competition among cloud service providers (CSPs), and digitalization of enterprises, particularly micro‐, small‐, and medium‐sized businesses, are increasing the clouds’ use.

Cloud computing is changing the way people and enterprises use computers and their work practices, as well as how companies and governments deploy their computer applications. It will drastically improve access to information for all as well as cut IT costs. It redefines not only the information and communication technology (ICT) industry but also enterprise IT in all industry and business sectors. It is also driving innovations by small enterprises and facilitating deployment of new applications that would otherwise be infeasible.

The introduction of new cloud computing platforms and applications, and the emergence of open standards for cloud computing will boost cloud computing’s appeal to both cloud providers and users.  Furthermore, clouds will enable open‐source and freelance developers to deploy their applications in the clouds and profit from their developments. As a result, more open‐source software will be published in the cloud. Clouds will also help close the digital divide prevalent in emerging and underdeveloped economies and may help save our planet by providing a greener computing environment.

Cloud Ecosystem

In order to embrace the cloud successfully and harness its power for traditional and new kinds of applications, we must recognize the features and promises of one or more of the three foundational cloud services – software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). We must also understand and properly address other aspects such as security, privacy, access management, compliance requirements, availability, and functional continuity in case of cloud failure. Furthermore, adopters need to learn how to architect cloud‐based systems that meet their specific requirements. We may have to use cloud services from more than one service provider, aggregate those services, and integrate them with on‐premises legacy systems or applications.

To assist cloud users in their transition to the cloud, a broader cloud ecosystem is emerging that aims to offer a spectrum of new cloud support services to augment, complement, or assist the foundational SaaS, IaaS, and PaaS offerings. Examples of such services are security as a service, identity management as a service, and data as a service. Investors, corporations, and startups are eagerly investing in promising cloud computing technologies and services in developed and developing countries. Many startups and established companies continue to enter the cloud arena offering a variety of cloud products and services, and individuals and businesses around the world are increasingly adopting cloud‐based applications. Governments are promoting cloud adoption, particularly among micro, small, and medium enterprises. Thus, a new larger cloud ecosystem is emerging.

Addressing the Challenges and Concerns

While hailing the features of existing and emerging new cloud services that help users adopt and tailor the services they use according to their needs, it is important to recognize that the cloud ecosystem still presents a few challenges and concerns. These concerns relate to performance, interoperability, the quality of service of the entire cloud chain, compliance with regulatory requirements and standards, security and privacy of data, access control and management, trust, and service failures and their impact. All these issues need to be addressed innovatively, and this calls for collaboration among various players in the cloud ecosystem.

The good news is that investors, established corporations, and startups are eagerly investing in promising cloud computing technologies and services, and are willing to collaborate (to an extent) to raise the clouds to newer heights. We can hope for a brighter, bigger, more collaborative cloud ecosystem that benefits all of its stakeholders and society at large. Cloud service providers, the IT industry, professional and industry associations, governments, and IT professionals all have a role to play in shaping, fostering, and harnessing the full potential of the emerging cloud ecosystem.

Gaining Cloud Computing Knowledge

To better understand and exploit the potential of the cloud – and to advance the cloud further – practitioners, IT professionals, educators, researchers, and students need an authoritative knowledge source that comprehensively and holistically covers all aspects of cloud computing.

Several books on cloud computing are now available  but none of them cover all key aspects of cloud computing comprehensively and meet the information needs of IT professionals, academics, researchers, and undergraduate and postgraduate students. To gain a holistic view of the cloud, one has to refer to a few different books, which is neither convenient nor practicable.

The new Encyclopedia of Cloud Computing, edited by us and published by IEEE Computer Society and Wiley this month, serves this need. It contains a wealth of information for those interested in understanding, using, or providing cloud computing services; for developers and researchers interested in advancing cloud computing; and for businesses and individuals interested in embracing and capitalizing on the cloud. In this encyclopedia, we offer a holistic and comprehensive view of the cloud from different perspectives.

Face recognition software in trucking

Caterpillar Safety Services has partnered with Seeing Machines to install fatigue detection software in thousands of mining trucks. According to a Huffington Post report, the software uses a camera, speaker, and light system to measure signs of fatigue – for instance, eye closure and head position. When a potential fatigue event is detected, the system sounds an alarm and sends a video clip of the driver to a 24-hour sleep fatigue center at Caterpillar headquarters, the report adds.

The following article is obtained from http://www.huffingtonpost.com/entry/caterpillar-sleep-fatigue-center_us_577d4c2ce4b0a629c1ab9b58

by Krithika Varagur

Associate Editor, The Huffington Post

 

Don’t forget to click the links because they come with useful information.

 

Why A Mining Company Is Getting Into Face Recognition Software

Drowsy driving is notoriously tough to detect. There’s no test to prove it, the way a breathalyzer can prove someone was driving drunk. But technology to detect drowsy driving is in the works.

In commercial transport, one industry is leading the way: mining. The stakes are particularly high in this field since the enormous haul trucks used in mining are several times the height of a person. Imagine dozing off at the wheel of one of these.

Caterpillar Safety Services, a consultancy branch of the global mining company, has partnered with the tech company Seeing Machines to put fatigue detection software in thousands of mining trucks around the world. The software uses a camera, speaker and light system to measure signs of fatigue like eye closure and head position. When a potential “fatigue event” is detected, the system sounds an alarm in the truck and sends a video clip of the driver to a 24-hour “sleep fatigue center” at Caterpillar headquarters in Peoria, Illinois.

At that point, a safety advisor contacts them via radio, notifies their site manager, and sometimes recommends a sleep intervention.

“This system automatically scans for the characteristics of microsleep in a driver,” Sal Angelone, a fatigue consultant at the company, told The Huffington Post, referencing the brief, involuntary pockets of unconsciousness that are highly dangerous to drivers. “But this is verified by a human working at our headquarters in Peoria.”

Caterpillar has a four-year license from Seeing Machines to manufacture the software. For now, it’s the exclusive provider of this technology within the mining industry. Some 5,000 vehicles ― a combination of Caterpillar’s own trucks and those of other mining companies ― carry the equipment. There are about 38,000 haul trucks worldwide, by Caterpillar’s estimate, so the fatigue-detecting trucks are still a small fraction of that, but Caterpillar hopes to eventually equip all of them.

When a “fatigue event” is recorded, it’s up to the mining site to recommend a course of action to the driver, or vice versa. Last month in Nevada, for instance, a mining truck driver had three fatigue events within four hours; he was contacted onsite and essentially forced to take a nap. Last February in North Carolina, one night shift truck driver who experienced a fatigue event realized it was a sign of an underlying sleep disorder and asked his site management for medical assistance. (Caterpillar has mining operations globally from China to Canada).

“It’s not unusual for someone to lose their frame of reference of what is normal in regard to fatigue,” said Angelone. This may be because miners’ shift work goes against typical human circadian rhythms. A driver’s shift is either eight or twelve hours long, said Angelone, but those shifts can occur during the middle of the night, late afternoon or any other time.

“Many sites run a 24/7 operation,” he said. “These drivers are not always sleeping through the night.”

In the past year, since the company started recording fatigue events last July, it has recorded about 600 instances, said Angelone. He said this constitutes a stunning 80 percent reduction in fatigue events from previous years.

The biggest reason for this, said Angelone, is that once an alarm goes off in a truck, the driver becomes much more aware of their fatigue, and is more cautious and proactive about drowsy driving than they would be otherwise.

These results invite the question of why fatigue detection software has not yet reached consumer vehicles.

One explanation is that the car industry has not been slow to embrace the technology, but that commercial trucking has been particularly fast.

“There is a lot of incentive to improve safety in our industry,” said Tim Crane, general manager of Caterpillar Safety Services. “Our vehicles are huge and pose unique challenges, so the government really wants to see that we’re trying.”

Crane expects the use of fatigue detection technology in consumer cars to increase “exponentially” in the next few years. Jeremy Terpstra of Seeing Machines echoed the sentiment.

“We have arrangements with many different car manufacturers,” he said. “It’s only a matter of time before this technology is in all vehicles, everywhere.”

Migrating Smart City Applications to the Cloud

This article is written by:

Michael Vögler and Johannes M. Schleicher of Technische Universität Wien

Christian Inzinger of University of Zurich

Schahram Dustdar of Technische Universität Wien

Rajiv Ranjan of Newcastle University


“Smart city” has emerged as an umbrella term for the pervasive implementation of information and communication technologies (ICT) designed to improve various areas of today’s cities. Areas of focus include citizen well-being, infrastructure, industry, and government. Smart city applications operate in a dynamic environment with many stakeholders that not only provide data for applications, but can also contribute functionality or impose (possibly conflicting) requirements. Currently, the fundamental stakeholders in a smart city are energy and transportation providers, as well as government agencies, which offer large amounts of data about certain aspects (for example, public transportation) of a city and its citizens.

 

Increasingly, stakeholders deploy connected Internet of Things (IoT) devices that deliver large amounts of near-real-time data and can enact changes in the physical environment. Efficient management of these large volumes of data is challenging, especially since data gathered by IoT devices might have critical security and privacy requirements that must be honored at all times. Nevertheless, this presents a significant opportunity to closely integrate stakeholders and data from different domains to create new applications that can tackle the increasingly complex challenges of today’s cities, such as autonomous traffic management, efficient building management, and emergency response systems.

 

Currently, smart city applications are usually deployed on premises. Cloud computing has matured to a point where practitioners are increasingly comfortable with migrating their existing smart city applications to the cloud to leverage its benefits (such as dynamic resource provisioning and cost savings). However, future smart city applications must also be able to operate across cities to create a global, interconnected system of systems for the future Internet of Cities.1 Therefore, such applications have to be designed, implemented, and operated as cloud-native applications, allowing them to elastically respond to changes in request load, stakeholder requirements, and unexpected changes in the environment.

 

Here, we outline our recent work on the smart city operating system (SCOS), a central element of future smart city application ecosystems. The SCOS is designed to resemble a modern computer operating system, providing unified abstractions for underlying resources and management tasks, but specifically tailored to city scale. We present the specific foundations of SCOS that enable a larger smart city application ecosystem,2 allowing stakeholders and citizens to create applications within the smart city domain. This approach enables them to build applications by only focusing on their specific demand, while completely freeing them from the complexities and problems they’re currently facing.

 

Read more at https://www.computer.org/cms/Computer.org/ComputingNow/issues/2016/07/mcd2016020072.pdf

How To Choose A Quality Project Management Software For Your Company

The following article is obtained from http://www.unioncountyga.gov/Portals/0/Users/207/19/719/How%20to%20choose%20a%20quality%20Project%20management%20software%20for%20your%20company.pdf

 

When your company starts to grow and you earn more profits, you will have to come up with new strategies. If you have built a large company, you will need tools to help you manage the whole business, and to do so you will require software to manage the projects you are running. To help you manage these projects, we are going to share a few things that will help you get the right software. Project management software becomes a necessity once you have grown into a big company. Here are a few things you need to pay attention to so you can choose the right software for your company.

Evaluating the right things

First of all, you need to be sure that you are well aware of your needs and that you choose the project management software with your company’s needs in mind. You can consider something like the Celoxis.com tools for online project management. If you are working online and managing your employees online, then you should consider getting software that is best suited to online project management. Your evaluation process will include many things, and you need to be sure that you are choosing the right product for your needs. If you have settled on a design and you really like the functionality of the software too, then you can be confident going with that option. However, if you need to add a few things to it, then you should get it tailored to your needs so your employees can benefit from it.

Software support and security

Make sure that you select software that has strong support from its developers and provides solid security. Security is something you need to treat as the first priority, and you need to be sure that you are choosing high-quality software that comes with lifetime support. If there is a fault in the software and you are unable to resolve it, you should be able to contact the support center so they can provide a fix for it. The same goes for security: you need to be sure that all of your data is properly secured.

 

Requirements

Today I read an interesting article on software requirements from http://ericsink.com/articles/Requirements.html.  Do read it when you have the time.

 

Here is an excerpt of the article:

 

What is a Spec?

A spec is short for “specification”.  A spec is something that describes what a piece of software should do.

For the moment I am being deliberately broad and inclusive.  If you are experienced in software project management, you probably have something very specific in mind when you think of the words “spec” or “requirement”.  In fact, it is possible that you are not willing to acknowledge that something is a spec unless it matches up fairly well with your image of same.  That’s okay.  Just stay with me.

For now, I’m saying that anything that is a “description of what a piece of software should do” can be considered a spec.  This may include:

  • A document
  • A bunch of 3×5 note cards
  • A spreadsheet containing a list of features

I am currently involved in a project where my role is “The Walking Spec”.  In other words, I am the person who mostly knows everything about how this piece of software should mostly behave.  When people need a spec, they ask me a question (footnote 2).  I’m not saying that I am a good spec, but I don’t think I’m the worst spec I have ever seen, and I am certainly better than no spec at all.  🙂

Seriously, a spec needs to be in a form which is accessible to more than one person.  It needs to be written down, either in a computer or on paper.

But how?

…..

 

on another note Eric Sink wrote:

Changing Requirements

If a project gets all the way to completion with bad requirements, the likelihood is that the software will be disappointing.  When this happens, the resulting assignment-of-blame exercise can be fun to watch.  From a safe distance.

More often, during the project somebody notices a problem with the requirements and changes them along the way.

Marketing:              By the way, I forgot to mention that the application has to be compatible with Windows 95.

Development:         Windows 95?  You’re kidding, right?  People stopped using Win95 over a decade ago!

Marketing:              Oh, and Mac OS 7.6 too.

Development:         What?  We’re building this app with .NET 3.0 and we’re already 40% done!

Marketing:              You’re half done?  That’s great!  Oh, and I forgot to mention we need compatibility with the Atari ST.

Development:         Why didn’t you tell us this before we started?

Marketing:              Sorry.  I forgot.  It’s no problem to change it now, right?

Changing requirements mid-project can be expensive and painful.

However, it is very rare to have a project where all the requirements are known and properly expressed before development begins.  So, it behooves us to prepare for changes.  If we choose a development process which rigidly requires a perfect spec before construction can begin, we are just setting ourselves up for pain.  We need to be a bit more agile.

Effort Estimation for Software Development

 

Software effort estimation has been an important issue for almost everyone in the software industry at some point. Below I will try to give some basic details on methods, best practices, common mistakes, and available tools.

Why is proper effort estimation important?
Effort estimation is essential for many people and different departments in an organization. Also, it is needed at various points of a project lifecycle.

Presales teams need effort estimation in order to price custom software, and project managers need it in order to allocate resources and plan a project’s timeline.
Usually, software development is priced based on the person-days it requires to be built, multiplied by a daily person-day rate. Without effort estimation, pricing is impossible.
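As a trivial worked example of that arithmetic, with invented numbers:

```python
# Trivial pricing arithmetic with invented numbers: effort in person-days
# multiplied by a daily rate gives the price of the development work.
estimated_effort_days = 120   # from whatever estimation method you use
daily_rate = 400              # currency units per person-day, illustrative
print(f"Quoted price: {estimated_effort_days * daily_rate}")
```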

Also, in order to plan a project and inform the project owners about deadlines and milestones you have to know how much effort the job requires.

Finally, an initial effort estimate shows whether you have the resources to finish the project within the customer’s or project owner’s predefined time limits, based on your available manpower.

Accuracy
Effort estimation accuracy depends on available information. Usually, you have less information before you start the project (presales) and more information while working on the project.

Most of the time, you can produce a more accurate effort estimate after requirements analysis. However, initial effort estimation at early project stages is sometimes more important; for example, you give a financial offer based on an early-stage effort estimate. The price you give will probably bind you for the whole project, so it is important to have a good estimate from the beginning.

Although it is obvious that accurate effort estimation is crucial, most of the time people fail to predict well. It is actually amazing how often and by how much effort estimation goes wrong.
Approximately 40% of industry software projects that get cancelled are cancelled due, partly or completely, to failures in effort estimation.

It is a common observation that the larger the project, the more essential it is to have a good estimate and, at the same time, the more difficult it is to produce one.

Keep in mind that both underestimation and overestimation cause trouble and make the project take longer to complete.

Who should do effort estimation and who is interested in it
Those responsible for effort estimates are usually the project managers. Depending on the chosen effort estimation method, they can estimate alone or with expert advice from developers, designers, and testers.

The other people who most need the effort estimate are project owners and sales. Most of the time, your effort estimate may be challenged by sales or management teams.
Salespeople want low cost, which means low effort estimates; you, on the other hand, want more resources, and your most valuable resource might be time. You also know that everyone will be happy if you finish earlier and no one will be if you finish later. In addition, when developers and designers give estimates, they have in mind the possibility of being pressed to finish tasks by a strict deadline… and, for sure, they don’t want that pressure, so they will most commonly assume the worst case when estimating.

So here is a conflict. How can you manage it? Well, there is no global solution. If you come up with an effort estimate of 100 person-days and sales say that it is too much, try to explain it and break the effort into small parts; that way, people better appreciate the work to be done. If they insist, check whether you really have allowed too much for something, and see if there are things that can be done more simply without losing specifications or requirements. Finally, you can always say: “This is my estimate, but you can sell it for as much as you want”.
How to estimate effort
There are a number of methods that are used for effort estimation. All of them have pros and cons, and all depend on the information the estimator has, their experience, and their judgement. Below I will explain most of them.

There are three main approaches to effort estimation:
Expert estimation: an expert on the work to be done gives a judgement-based estimate.
Formal estimation model: you feed a formal model with the appropriate data to obtain an estimate.
Combination-based estimation: the estimate is derived from a mixture of expert estimation and formal estimation procedures.

Each approach has one or more methods. Below you will find the most common ones.

Work Break-Down Structure
This seems to be the most common method. Using it, you break the project down into small units of work: tasks. Then, you estimate the effort for every task.
This is an Expert Judgement method, and it comes in two flavours: the Three Point System and the Delphic Oracle.
Using the Three Point method, an expert gives three estimates for every task: best case, most probable, and worst case. The effort for every task is a weighted average of the three estimates, where the most probable effort gets the higher weight.
Delphic Oracle means that we get three different people to estimate the task effort. The final task effort is the average.
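To make the arithmetic concrete, here is a minimal Python sketch of both flavours, with made-up tasks and numbers. It assumes the common PERT-style weighting (best + 4 × most probable + worst) / 6 for the Three Point method; the exact weights are not fixed above, so treat that as one reasonable choice.

```python
# Sketch of Work Break-Down Structure estimation with illustrative numbers.
# Assumes the common PERT weighting (best + 4*most_likely + worst) / 6.

def three_point(best, most_likely, worst):
    """Weighted average of three estimates; the most likely case weighs most."""
    return (best + 4 * most_likely + worst) / 6

def delphic_oracle(estimates):
    """Average the independent estimates of several experts."""
    return sum(estimates) / len(estimates)

# Hypothetical task breakdown: (best, most likely, worst) in person-days.
tasks = {
    "login screen": (2, 3, 6),
    "reporting module": (5, 8, 15),
    "data import": (3, 4, 9),
}

total = sum(three_point(b, m, w) for b, m, w in tasks.values())
print(f"Three Point total: {total:.1f} person-days")

# Same project estimated independently by three experts (person-days).
print(f"Delphic Oracle total: {delphic_oracle([16, 20, 25]):.1f} person-days")
```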

Analogy / Comparison
This is a Formal Estimation method. With this method we search for projects with similar characteristics and choose the ones closest to the project we are estimating.
Analogy-based estimation is another technique for early-life-cycle macro-estimation. It involves selecting one or two completed projects that most closely match the characteristics of your planned project. The chosen project is then used as the base for your new estimate.
Comparison-based estimation involves considering the attributes of the project to be estimated, selecting projects with similar attributes, and then using the median values for effort, duration, and so on from the selected group of projects to produce an estimate of project effort.
A more recent method is Weighted Micro Function Points (WMFP), a modern software sizing algorithm invented by Logical Solutions. Whereas many of its predecessor measurement methods use source lines of code (SLOC) to measure software size, WMFP uses a parser to understand the source code, breaking it down into micro functions and deriving several code complexity and volume metrics.
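As a rough illustration of the analogy/comparison idea (not of WMFP), the sketch below picks the historical projects closest to the planned one on a few attributes and takes the median of their actual efforts; the attributes, distance measure, and numbers are all invented.

```python
# Sketch of comparison-based estimation: find past projects with similar
# attributes and use the median of their actual efforts. All data are invented.
from statistics import median

history = [
    {"screens": 12, "integrations": 2, "team": 3, "effort": 90},   # person-days
    {"screens": 30, "integrations": 5, "team": 6, "effort": 260},
    {"screens": 15, "integrations": 3, "team": 4, "effort": 120},
    {"screens": 8,  "integrations": 1, "team": 2, "effort": 55},
]

planned = {"screens": 14, "integrations": 3, "team": 4}

def distance(past, new):
    """Simple attribute distance; a real model would normalize and weight."""
    return sum(abs(past[k] - new[k]) for k in new)

# Select the three most similar completed projects and take the median effort.
closest = sorted(history, key=lambda p: distance(p, planned))[:3]
estimate = median(p["effort"] for p in closest)
print(f"Comparison-based estimate: {estimate} person-days")
```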

COCOMO II
This is another Formal method; it uses various parameters and a defined formula to estimate effort (a parametric model).
COnstructive COst MOdel II (COCOMO II) is the latest major extension to the original COCOMO (COCOMO 81) model published in 1981.
COCOMO accepts quantitative and qualitative weighted characteristics as input and produces an effort estimate.
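For a feel of how a parametric model works, here is a sketch of the basic COCOMO II effort equation, Effort = A × Size^E × ∏EM with E = B + 0.01 × ΣSF. The constants A ≈ 2.94 and B ≈ 0.91 are the commonly cited COCOMO II.2000 calibration; the scale-factor and effort-multiplier values below are placeholders, not a real calibration.

```python
# Sketch of the COCOMO II post-architecture effort formula (person-months).
# A and B follow the commonly cited COCOMO II.2000 calibration; the scale
# factors and effort multipliers below are illustrative placeholders only.
A, B = 2.94, 0.91

def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers):
    """Effort = A * Size^E * product(EM), with E = B + 0.01 * sum(SF)."""
    exponent = B + 0.01 * sum(scale_factors)
    product_em = 1.0
    for em in effort_multipliers:
        product_em *= em
    return A * (ksloc ** exponent) * product_em

# Hypothetical 40 KSLOC project with mid-range ratings.
scale_factors = [3.7, 3.0, 4.2, 3.3, 4.7]      # e.g. precedentedness, flexibility, ...
effort_multipliers = [1.0, 1.1, 0.9, 1.0]      # e.g. complexity, reuse, tool support, ...
print(f"{cocomo_ii_effort(40, scale_factors, effort_multipliers):.1f} person-months")
```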
Group Estimation (Wideband Delphi)
The Wideband Delphi estimation method is a consensus-based technique for estimating effort. People in a team meeting submit anonymous effort estimation forms and then discuss the points where the estimates vary widely.
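A tiny sketch of that consensus loop: collect anonymous estimates per task, flag the tasks where the estimates spread widely so they get discussed, and re-estimate. The spread threshold and the data are made up.

```python
# Sketch of a Wideband Delphi round: flag tasks whose anonymous estimates
# diverge too much, so the team knows what to discuss before re-estimating.
from statistics import mean

# Anonymous estimates per task (person-days), one value per participant.
estimates_by_task = {
    "search feature": [5, 6, 5, 7],
    "payment integration": [8, 20, 12, 30],   # wide spread -> needs discussion
}

SPREAD_THRESHOLD = 0.5   # flag if (max - min) exceeds 50% of the mean

for task, estimates in estimates_by_task.items():
    spread = (max(estimates) - min(estimates)) / mean(estimates)
    if spread > SPREAD_THRESHOLD:
        print(f"{task}: estimates vary widely ({estimates}); discuss and re-estimate")
    else:
        print(f"{task}: consensus estimate {mean(estimates):.1f} person-days")
```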

Estimating Size
Most formal methods require defining project size in some way. Most of them use SLOC (source lines of code) or Unadjusted Function Points (e.g., database tables, input screens), while expert judgement methods focus on breaking the project down into small parts whose effort is easy to predict directly.
Since, especially at early stages, you don’t know the SLOC count, using formal methods means relying on your experience from past projects and on a good analysis of the requirements.
If you keep estimating and then check the actual size against your initial estimate, you will become more and more accurate at project sizing.

Best Practices for Effort Estimation
Below I have summarized some of the best practices to follow for better effort estimation (at least, what in my opinion should work better).

-If you use a work breakdown structure, use both Three Point Estimation and the Delphic Oracle and see which works better for your organization.
-Identify the right people to do the estimates. Some may prove pessimistic while others prove very optimistic.
-Use more than one method and compare the results (assuming you have the time to do so).
-Usually the people who will have to develop the project will be pessimistic.
-People who will not have to work on the project are most of the time optimistic.
-Keep all your estimates and compare them with actual results in order to calibrate your models.
-Gather as much information about requirements as you can before you start estimation.
-Even if you have a requirements document, you may need to decompose the features into smaller features that can be compared to past experiences.
-Do not get into very tiny details. The further you go into detail at the early estimation stage, the more uncertainty creeps in and the less accurate the estimate becomes (overfitting).
-Don’t put someone with no experience at all with this type of project in charge of estimating, because you will just get a WAG (Wild-Ass Guess) and the estimator’s hope that they are not wrong. Given the rarity of being punished for under-promising and over-delivering, this WAG tends to be a massive over-estimation.
-If sales and management have a strong opinion on your estimate, use their method, Price-to-Win: ask what price will win the customer and see what effort that price allows for the project. Break that effort down into tasks and see how feasible it is.
-Keep in mind that productivity falls as the project becomes bigger.
-Usually you need 20% of the time for requirements, 25% for testing, 40% for design, and 15% for coding. If you spend more effort on one step, you will most probably need more effort for all the rest, in proportion to their share of total project effort (see the sketch after this list).
-Check if you can use group-based estimation that helps the entire team arrive at a shared understanding of what each feature/story/etc. is supposed to do. This is also good in order to keep a high bus factor.
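Here is the sketch referred to in the list above: a quick illustration of how the phase-percentage rule of thumb re-scales the whole plan when one phase overruns. The numbers are illustrative only.

```python
# Sketch: if one phase needs more effort than its nominal share suggests,
# re-scale the total so every other phase keeps its percentage of the whole.
phase_share = {"requirements": 0.20, "design": 0.40, "coding": 0.15, "testing": 0.25}

planned_total = 100          # person-days, illustrative
actual_requirements = 28     # requirements took 28 instead of the planned 20

# Infer the implied total from the phase that overran, then re-derive the rest.
implied_total = actual_requirements / phase_share["requirements"]
for phase, share in phase_share.items():
    print(f"{phase}: {share * implied_total:.0f} person-days (of {implied_total:.0f} total)")
```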

More Common Mistakes
Steve McConnell, in “10 Deadly Sins of Software Estimation,” mentions 10 mistakes (sins) in estimating scope. I will list all of them here, although some have already been discussed.

1. Do Not Confuse estimates with targets
2. Do not say yes when really meaning no
3. Do not commit too early with lots of uncertainties
4. Do not assume underestimation has no impact on project result
5. Do not estimate in the “impossible zone” (an “impossible zone” is a compressed schedule with a zero chance of success)
6. Do not overestimate savings from new tools or methods (Payoff is less than expected)
7. Do not use only one estimation technique
8. Use estimation software
9. Include risk impact
10. Do not provide off-the-cuff estimates (treat estimation of a big project as a mini project)

Tools
There are many tools available to assist you with effort estimation.
You can even make your own Excel spreadsheets for tallying effort using a work breakdown structure, but try the available tools first.

First of all, I would recommend trying the free Orange Effort Estimation Tool

Orange Effort Estimation
The tool is web-based, so it can be used from anywhere with a web browser. There is a server part and a client part. All calculations are performed, and all data stored, on one central server/database. The client communicates using SOAP web services. The client-side code is available as open source for everyone to download.

This tool enables software development effort estimation using five different methods, all of them industry standards:
COCOMO II, Work Breakdown Estimation, Analogy / Comparison Estimation, and custom modular estimation for Web and Mobile.

The tool can be fed with custom module estimates for use in future project estimations and also allows the feeding of data for analogy/comparison estimation.
With your ID you can save and edit your estimates. The tool has a lot of comments for most fields in order to help you. Based on the data entered, it may use all methods and give a combined estimate, or use any combination of them, even just one, to provide an effort estimate.
In addition, some basic suggestions are offered when actual effort data is available, to help improve future estimates.

Learn more and get the tool at http://tecorange.com/orange-effort-estimation-tool-software-development

Effort Estimation Mobile Application

The Effort Estimation Mobile Application can be used by sales people, developers, designers, project managers, and actually anyone who can capture basic requirements. It’s a tool that everyone should have.

It is available as a mobile application for all platforms (iOS, Android, Windows Phones and as Mobile Web App). Main features are:

Easy to use, feel comfortable from first minute
Accepts easy to understand and capture inputs, a tool not only for Developers or IT Experts
Optimized Algorithm for Web and Mobile Projects
Save Projects for Future Reference
E-mail Estimations
Easily Calibrate the Model to fit your needs
Instructions in input screens
Includes usage instructions and example cases
Uses local storage, data are stored in your browser-computer-mobile device
Works Offline
W3C Validated HTML5, CSS

Agile COCOMO II
Agile COCOMO II is a web-based software cost estimation tool that enables you to adjust your estimates by analogy through identifying the factors that will be changing and by how much.
http://sunset.usc.edu/cse/pub/research/AgileCOCOMO/AgileCOCOMOII/Main.html

Bournemouth University – ANGEL Project
Estimation by analogy is the focus of a research project being undertaken by the Empirical Software Engineering Research Group (ESERG) at Bournemouth University. A brief bibliography and the downloadable ANGEL tool are provided.
http://dec.bmth.ac.uk/ESERG/ANGEL/

Costar and SystemStar
Costar is an automated implementation of COCOMO II developed by SoftStar Systems. SystemStar is an automated implementation of COSYSMO.
http://www.SoftstarSystems.com/

KnowledgePLAN
SPR KnowledgePLAN is a software tool designed to help plan software projects. With KnowledgePLAN you can size your projects and then estimate work, resources, schedule and defects. You can evaluate project strengths and weaknesses to determine their impact on quality and productivity.
http://www.spr.com/spr-knowledgeplanr.html

SLIM
QSM’s Software LIfecycle Management (SLIM) tools support decision making at each stage of the software lifecycle: estimating, tracking, and benchmarking and metrics analysis. Each tool is designed to deliver results, whether used as a standalone application or as part of QSM’s integrated suite of software metrics tools.
http://www.qsm.com/products.html

Epilogue
Effort estimation requires knowledge, experience, and judgment, along with the trial and error that will fine-tune your methods. This article is a high-level introduction; every method, especially the model-based ones, uses complicated formulas and calculations to make predictions. More methods are also available (e.g., Planning Poker, a game-like method). Effort estimation is essential and important, but it should not be the most important thing: if accuracy in estimation proves to be of huge importance and there is a lot of pressure around it, then it might be a project that you should not undertake.

To move from effort to software costing, read the Cost of Software article.

What is important is to find out what suits your organization and fine-tune it while avoiding common mistakes. If you keep failing, revise, even if your expectations say otherwise. An effort estimate is just another plan; real results will show you the way.

“When the territory and the map disagree, believe the territory.” Swiss Army Manual