Monday, 29 February 2016

About Data Mining

Data Mining

Data Mining is an analytic process designed to explore data (usually large amounts of data - typically business or market related - also known as "big data") in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. The ultimate goal of data mining is prediction - and predictive data mining is the most common type of data mining and one that has the most direct business applications. The process of data mining consists of three stages: (1) the initial exploration, (2) model building or pattern identification with validation/verification, and (3) deployment (i.e., the application of the model to new data in order to generate predictions).
Stage 1: Exploration. This stage usually starts with data preparation, which may involve cleaning data, transforming data, selecting subsets of records and, in the case of data sets with large numbers of variables ("fields"), performing some preliminary feature selection operations to bring the number of variables to a manageable range (depending on the statistical methods being considered). Then, depending on the nature of the analytic problem, this first stage may involve anything from a simple choice of straightforward predictors for a regression model to elaborate exploratory analyses using a wide variety of graphical and statistical methods (see Exploratory Data Analysis (EDA)) in order to identify the most relevant variables and determine the complexity and/or general nature of the models to be considered in the next stage.
Stage 2: Model building and validation. This stage involves considering various models and choosing the best one based on their predictive performance (i.e., explaining the variability in question and producing stable results across samples). This may sound like a simple operation, but in fact it sometimes involves a very elaborate process. A variety of techniques have been developed to achieve that goal, many of them based on so-called "competitive evaluation of models," that is, applying different models to the same data set and then comparing their performance to choose the best. These techniques, which are often considered the core of predictive data mining, include Bagging (Voting, Averaging), Boosting, Stacking (Stacked Generalizations), and Meta-Learning.
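To make the idea of "competitive evaluation of models" concrete, here is a minimal sketch in Python using scikit-learn (my own choice of toolkit, not one named above): several candidate models, including bagging, boosting and stacking, are applied to the same data set and compared by cross-validated accuracy. The data set is synthetic and purely illustrative.

# Competitive evaluation of models: apply several candidates to the same data
# and keep the one with the best cross-validated score (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "bagging": BaggingClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()),
                    ("logreg", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

# Same data, different models; mean 5-fold accuracy decides the "winner".
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
print(scores)
print("best model:", max(scores, key=scores.get))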
Stage 3: Deployment. This final stage involves taking the model selected as best in the previous stage and applying it to new data in order to generate predictions or estimates of the expected outcome.
The concept of Data Mining is becoming increasingly popular as a business information management tool, where it is expected to reveal knowledge structures that can guide decisions under conditions of limited certainty. Recently, there has been increased interest in developing new analytic techniques specifically designed to address the issues relevant to business Data Mining (e.g., Classification Trees), but Data Mining is still based on the conceptual principles of statistics, including traditional Exploratory Data Analysis (EDA) and modeling, and it shares with them both general approaches and specific techniques.
However, an important general difference in focus and purpose between Data Mining and traditional Exploratory Data Analysis (EDA) is that Data Mining is more oriented towards applications than towards the basic nature of the underlying phenomena. In other words, Data Mining is relatively less concerned with identifying the specific relations between the variables involved. For example, uncovering the nature of the underlying functions or the specific types of interactive, multivariate dependencies between variables is not the main goal of Data Mining. Instead, the focus is on producing a solution that can generate useful predictions. Therefore, Data Mining accepts, among other things, a "black box" approach to data exploration or knowledge discovery, and uses not only the traditional Exploratory Data Analysis (EDA) techniques but also techniques such as Neural Networks, which can generate valid predictions but are not capable of identifying the specific nature of the interrelations between the variables on which the predictions are based.
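As a small illustration of that "black box" character, the sketch below (using scikit-learn's MLPRegressor, my own choice of example rather than anything prescribed in the text) fits a neural network that yields usable predictions while exposing only opaque weight matrices, not interpretable relations between the variables. The data is invented for the example.

# A neural network as a "black box": good predictions, little insight into
# the specific form of the relations between the input variables.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # an interaction the model never names

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X[:5]))                      # useful predictions...
print([w.shape for w in model.coefs_])           # ...explained only by opaque weight matrices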

Database Tools and Techniques

Introduction

The amount of data being collected in databases today far exceeds our ability to reduce and analyze data without the use of automated analysis techniques. Many scientific and transactional business databases grow at a phenomenal rate. A single system, the astronomical survey application SCICAT, is expected to exceed three terabytes of data at completion [4]. Knowledge discovery in databases (KDD) is the field that is evolving to provide automated analysis solutions.
Knowledge discovery is defined as "the non-trivial extraction of implicit, unknown, and potentially useful information from data" [6]. In [5], a clear distinction between data mining and knowledge discovery is drawn. Under their conventions, the knowledge discovery process takes the raw results from data mining (the process of extracting trends or patterns from data) and carefully and accurately transforms them into useful and understandable information. This information is not typically retrievable by standard techniques but is uncovered through the use of AI techniques.
KDD is a growing field: There are many knowledge discovery methodologies in use and under development. Some of these techniques are generic, while others are domain-specific. The purpose of this paper is to present the results of a literature survey outlining the state-of-the-art in KDD techniques and tools. The paper is not intended to provide an in-depth introduction to each approach; rather, we intend it to acquaint the reader with some KDD approaches and potential uses.

Background

Although there are many approaches to KDD, six common and essential elements qualify each as a knowledge discovery technique. The following are basic features that all KDD techniques share (adapted from [5] and [6]):
  • All approaches deal with large amounts of data
  • Efficiency is required due to volume of data
  • Accuracy is an essential element
  • All require the use of a high-level language
  • All approaches use some form of automated learning
  • All produce some interesting results
Large amounts of data are required to provide sufficient information to derive additional knowledge. Since large amounts of data are required, processing efficiency is essential. Accuracy is required to assure that discovered knowledge is valid. The results should be presented in a manner that is understandable by humans. One of the major premises of KDD is that the knowledge is discovered using intelligent learning techniques that sift through the data in an automated process. For this technique to be considered useful in terms of knowledge discovery, the discovered knowledge must be interesting; that is, it must have potential value to the user.
KDD provides the capability to discover new and meaningful information from existing data, at a scale that quickly exceeds the human capacity to analyze it. The amount of data that requires processing and analysis in a large database exceeds human capabilities, and the difficulty of accurately transforming raw data into knowledge surpasses the limits of traditional databases. Therefore, the full utilization of stored data depends on the use of knowledge discovery techniques.
The usefulness of future applications of KDD is far-reaching. KDD may be used as a means of information retrieval, in the same manner that intelligent agents perform information retrieval on the web. New patterns or trends in data may be discovered using these techniques. KDD may also be used as a basis for the intelligent interfaces of tomorrow, by adding a knowledge discovery component to a database engine or by integrating KDD with spreadsheets and visualizations.

KDD Techniques

Learning algorithms are an integral part of KDD. Learning techniques may be supervised or unsupervised. In general, supervised learning techniques enjoy a better success rate as defined in terms of usefulness of discovered knowledge. According to [1], learning algorithms are complex and generally considered the hardest part of any KDD technique.
Machine discovery is one of the earliest fields that has contributed to KDD [5]. While machine discovery relies solely on an autonomous approach to information discovery, KDD typically combines automated approaches with human interaction to assure accurate, useful, and understandable results.
There are many different approaches that are classified as KDD techniques. There are quantitative approaches, such as the probabilistic and statistical approaches. There are approaches that utilize visualization techniques. There are classification approaches such as Bayesian classification, inductive logic, data cleaning/pattern discovery, and decision tree analysis. Other approaches include deviation and trend analysis, genetic algorithms, neural networks, and hybrid approaches that combine two or more techniques.
Because of the ways that these techniques can be used and combined, there is a lack of agreement on how these techniques should be categorized. For example, the Bayesian approach may be logically grouped with probabilistic approaches, classification approaches, or visualization approaches. For the sake of organization, each approach described here is included in the group that it seemed to fit best. However, this selection is not intended to imply a strict categorization.

Probabilistic Approach

This family of KDD techniques utilizes graphical representation models to compare different knowledge representations. These models are based on probabilities and data independencies. They are useful for applications involving uncertainty and applications structured such that a probability may be assigned to each "outcome" or bit of discovered knowledge. Probabilistic techniques may be used in diagnostic systems and in planning and control systems [2]. Automated probabilistic tools are available both commercially and in the public domain.
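As a hedged illustration of the kind of reasoning such tools automate, the short Python sketch below assigns a probability to a diagnostic "outcome" with Bayes' rule; the fault and alarm figures are invented for the example, and a real probabilistic KDD tool would do this over a full graphical model rather than a single pair of variables.

# Bayes' rule on a toy diagnostic problem: how likely is a fault, given an alarm?
p_fault = 0.01                  # prior probability that a component is faulty
p_alarm_given_fault = 0.95      # the sensor fires when the fault is present
p_alarm_given_ok = 0.05         # false-alarm rate when everything is fine

p_alarm = p_alarm_given_fault * p_fault + p_alarm_given_ok * (1 - p_fault)
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
print(f"P(fault | alarm) = {p_fault_given_alarm:.3f}")   # roughly 0.16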

Statistical Approach

The statistical approach uses rule discovery and is based on data relationships. An "inductive learning algorithm can automatically select useful join paths and attributes to construct rules from a database with many relations" [8]. This type of induction is used to generalize patterns in the data and to construct rules from the noted patterns. Online analytical processing (OLAP) is an example of a statistically-oriented approach. Automated statistical tools are available both commercially and in the public domain.
An example of a statistical application is determining that all transactions in a sales database that start with a specified transaction code are cash sales. The system would note that, of all the transactions in the database, only 60% are cash sales. Therefore, the system may accurately conclude that the remaining 40% are non-cash (collectible) sales.
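A minimal Python sketch of that example follows; the transaction records and the "CS-" code prefix are hypothetical, but the computation mirrors the 60%/40% split described above.

# Derive the cash/non-cash split from raw transaction records (toy data).
transactions = [
    {"code": "CS-1001", "type": "cash"},
    {"code": "CS-1002", "type": "cash"},
    {"code": "CR-2001", "type": "credit"},
    {"code": "CS-1003", "type": "cash"},
    {"code": "CR-2002", "type": "credit"},
]

cash_share = sum(t["code"].startswith("CS-") for t in transactions) / len(transactions)
print(f"cash sales: {cash_share:.0%}, non-cash (collectible) sales: {1 - cash_share:.0%}")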

Build your app using tools

1. AppMakr
This is a browser-based platform designed to make creating your own iPhone app quick and easy. You can use existing content and social networking feeds to produce a variety of different approaches for your app. It includes features such as push notifications, location-aware GeoRSS, custom CSS and JavaScript capabilities.
The tool is free to use, but a $79 monthly subscription per app gets you access to more advanced features. AppMakr works on the iOS, Android and Windows operating systems.
2. GENWI
This tablet and smartphone publishing platform allows you to create and manage your presence on all popular mobile devices, including iPad, iPhone, Android and HTML5 apps. It delivers rich graphics, photos, video, audio and other forms of interactivity.
GENWI also enables you to revise your apps as often as you like. What's more, apps can include various revenue-generating capabilities for businesses, like ads, coupons and in-app subscriptions. After a three-month trial, pricing varies by features included.
3. Mippin
One of the greatest strengths of the London-based Mippin platform is its ease of use. It allows you to create apps for Android, iOS and Windows, and provides flexibility in designing the app. You can even have Mippin distribute your app for you to the iTunes, Android, Windows and Amazon stores. Native apps can cost as much as $999 per year.
4. MobBase
Are you a singer or in a band? If so, then MobBase is for you. This app builder allows you to use an RSS feed to keep your fans up to date on band news and events, lets you upload tracks for fans to listen to while browsing the apps and makes it easy for fans to find information on upcoming shows, buy tickets and get directions.
Activation of an iOS app requires a one-time fee of $250 (includes $99 to set up an iOS developer account). Android activations run $20. Hosting fees range from $15 to $65 a month. Additional fees for support services are not included.
5. MobiCart
Do you have an e-commerce store that you'd like to take into the mobile sphere? Then MobiCart might be what you're looking for. It links up with PayPal to allow any business or consumer with an e-mail address to securely, conveniently and cost-effectively send and receive payments online.
Basic plans cost $15 per month; "Pro" plans cost $49 per month.
6. MyAppBuilder
For just $29 a month, MyAppBuilder will create an iPhone or Android app for you. All you have to do is provide content (videos, books, etc.) and their pros will take it from there. You don't need a technical background to develop an app with MyAppBuilder. They'll even take care of the hassle of uploading it to the app store for you.
MyAppBuilder says you can create two apps per month once you register and pay the $29 monthly membership fee.

Software Testing Responsibilities

  • In the planning and preparation phases of testing, testers should review and contribute to test plans, as well as analyze, review and assess requirements and design specifications. They may be involved in or even be the primary people identifying test conditions and creating test designs, test cases, test procedure specifications and test data, and may automate or help to automate the tests.
  • They often set up the test environments or assist system administration and network management staff in doing so.
  • As test execution begins, the number of testers often increases, starting with the work required to implement tests in the test environment.
  • Testers execute and log the tests, evaluate the results and document problems found.
  • They monitor the testing and the test environment, often using tools for this task, and often gather performance metrics.
  • Throughout the testing life cycle, they review each other’s work, including test specifications, defect reports and test results.

Developments in internet

Technical Specifications

The Internet was built on the premise of interoperability based on independent implementations of common specifications: Internet specifications. By focusing on interoperability for passing traffic between networks, Internet standards describe the protocols on the wire without prescribing device characteristics, business models, or content.
The value of this building-block approach is seen in the range and depth of innovation and development in Internet technologies and services. New components, whether networks, services, or software, work seamlessly with existing deployments, as long as all of the pieces correctly implement applicable standards on the network. This makes the field of possible innovations virtually limitless.
Apart from the focus on wire protocols for interoperability, one might say that successful Internet standards share certain characteristics, as follows:
  • Freely accessible specifications: All of the relevant written specifications required to implement the standard are available without fee or requirement of other contractual agreement such as a nondisclosure agreement or license.
  • Unencumbered: It is possible to implement and deploy technology based on the standard without undue licensing fees or restrictions.
  • Open development: In order to have relevance in the resulting standard, it is critical that all parties working with impacted technologies be able to participate in and learn from the history of the development of an Internet standard.
  • Always evolving: As the Internet itself continues to evolve, new needs for interoperability get identified. Therefore, the standards that support the Internet must evolve to address identified technical requirements.
Again, these characteristics may be familiar to IETF participants, but they are important to articulate and share.

Deployment Realities: Awareness and Feedback

For newly developed building blocks to work seamlessly with existing deployments, they have to be designed with some level of awareness of actual deployment realities. It's not enough to posit a desirable outcome; feedback from past successes and failures, deployment conditions, and expectations of uptake are required throughout the development of new specifications. For example, the IETF encourages this through open participation by all engineers with relevant expertise, as well as the formation of working groups dedicated to operational aspects, such as v6ops and dnsop.
Of course, while the technical specification process views deployment realities as input, broader deployment discussions are important in the identification of critical needs too. That is, operational experience with network usage, new or updated protocols, best practices, and so on are things that are best articulated in groups of deployment experts: people with operational expertise. This is where regional operator group meetings, such as NANOG, RIPE, and APRICOT, are key for network operations activities. More-regional and more-focused network operator groups can draw experts to discuss local issues as well as global issues in context. It's especially valuable to get cross-pollination between these activities and technical specification activities.
Sometimes it's important to bring back the deployment reality issues to the IETF in a broader context than specific work in a particular working group. This is often the driver behind the Internet Architecture Board's technical plenary topic selections. The session on network neutrality at IETF 75 provided just such an opportunity; it was a chance to hear the perspectives of decision makers (governments and regulators) that are outside the traditional operational network realm. It's also a motivation behind the Internet Society's recent media briefing panels, such as the Securing the DNS panel.

Looking to the Future: Research

When it comes to gathering data, examining issues, and seeking answers without the restrictions of established environments, organized research is key. The IRTF's work with the IETF can serve as an important bridge between the world of research activities and the realm of technical specification. This was especially well illustrated in the case of the Host Identity Protocol, which has had concurrent research and working groups examining various aspects of development and specification.
As noted earlier, there are a number of clean-slate research programmes under way around the world, many of which focus on considering known issues, such as security, congestion control, and routing, within new network developments completely independently of the deployed Internet. The research will yield interesting answers to the important how and what-if questions. The next question is, How will the world make use of the answers? It could be through the blanket deployment, from scratch, of the new networks that those research activities propose from those clean slates. However, that could not happen overnight. Alternatively, the lessons learned through those research activities may well inform current Internet building-block developments, because strong evidence of the value of a different direction provides impetus to get development and deployment over hurdles that might otherwise have seemed insurmountable. That means that research activities must be discussed and shared equally within the processes for technical specification and deployment feedback.
Fifteen years ago, the percentage of researchers among active IETF participants was higher than it is today. Perhaps that's not surprising, given that the core Internet then still featured a large number of research networks and nodes operated by academic and research institutions. Nevertheless, there are still researchers who get involved in IETF activities, as witnessed by the level of attendance at an Internet Society cross-regional (Europe, North America, and Asia) future Internet researcher luncheon. In a discussion of the challenges to future Internet research activities, it became clear that one of the significant challenges involves getting a coherent research agenda that is useful for framing funded research activities across regions. That's one way the specification and deployment activities of the Internet could feed back into the research world. Likewise, highlighting the most promising research results from around the world to the operational and standardization communities will help close the loop on the cycle of activities that constitute this model of Internet development.

Saturday, 27 February 2016

Programming Languages for web development

1.   JavaScript

JavaScript is one of the most popular and dynamic programming languages used for creating and developing websites. The language can do several things, including controlling the browser, editing content in a displayed document, allowing client-side scripts to communicate with users and enabling asynchronous communication. It was developed by Netscape and borrows much of its syntax from the C language. JavaScript is also used widely and effectively in creating desktop applications and in developing games.
One of the best things about JavaScript for you as a developer or a website owner is that it is one of the few programming languages accepted and supported by all the major browsers without the need for any compilers or plug-ins. It can also be used on platforms that are not web-based, for example, desktop widgets and PDF documents. It is a multi-paradigm language, which means it combines several programming styles; in particular, JavaScript supports both functional and object-oriented programming.
The features of a language define the way it works, the way it responds, how easy its code is to write and read, and what it can achieve. The following are some of the main features of the JavaScript programming language for your reference:
  • Structured – JavaScript is a highly structured language with a proper and planned syntax derived from C. It has function scoping but lacks block scoping, unlike C. It also differentiates between statements and expressions, just as C does.
  • Dynamic – In JavaScript, types are associated with values rather than with variables. This is a dynamic programming language that lets you test the type of an object in many different ways. The language is also object-oriented, with all objects behaving as associative arrays.
  • Functional – All functions in JavaScript are first-class objects; they can have their own properties and methods. For example, a function defined within another function is called a nested function, and the language also supports anonymous functions.

2.   Java

Java is yet another highly popular and widely used language that you can consider for web development. It is an object-oriented, class-based and concurrent language that was developed by Sun Microsystems in the 1990s. Since then, it has remained one of the most in-demand languages and serves as a standard platform for enterprises and for many mobile and games developers across the world. The language has been designed so that it works across several types of platforms: a program written on the Mac operating system can also run on Windows-based operating systems.
Java was originally designed for interactive television, but the developers realized that the language and technology were too far ahead of that industry at the time. It was only later that Java was put to the use it serves today.
Every language is created with a certain mission, goal or objective in mind. The following are the 5 major principles or goals that were kept in mind during the creation of this language:
  • It must be a secure and robust programming language
  • It must be a simple, object-oriented and familiar language.
  • It must be capable of being implemented and executed with high performance.
  • It must be threaded, dynamic and interpreted.
  • It must be portable and architecture-neutral.

3.   Python

Python is a widely used, general-purpose programming language that is dynamic in nature. Being dynamic means that you as a developer can write and run code without a separate compilation step. The language is designed to support code readability, which means its syntax allows a point or a concept to be expressed in just a few lines of code; readable code is also achievable in languages such as Java and C++. Python is a high-level language that is considered easy for beginners to understand and learn.
Some of the apps powered by Python are Rdio, Instagram, and Pinterest. Besides this, Python powers web frameworks such as Django and is used by organizations including Google, NASA, and Yahoo. Some of the other features of this language include automatic memory management, a large standard library, a dynamic type system and support for many programming paradigms.
Python follows a core philosophy and takes its main principles seriously. The language was designed with the aim of being highly extensible, meaning it can easily be incorporated into, or embedded in, existing applications. The goal of the developers was also to make the language fun to use, and they shaped it in a way that discourages premature optimization. Here's a look at some of the principles, summarized for you, with a short example after the list:
  • Readability is important
  • Complex is better than complicated
  • Beautiful is better than ugly
  • Simple is better than complex
  • Explicit is better than implicit
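As a small, hedged illustration of the readability point above, the snippet below is a complete program that filters and summarizes a list of scores in a handful of self-explanatory lines; the data is invented.

# Readability in practice: filter and summarize in a few clear lines.
scores = [72, 88, 95, 61, 84]
passing = [s for s in scores if s >= 70]
print(f"{len(passing)} of {len(scores)} passed, average {sum(passing) / len(passing):.1f}")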

4.   CSS

CSS, or Cascading Style Sheets, is a style sheet language rather than a programming language. When paired with HTML, CSS allows a developer to decide and define how a web page or a website will eventually look and how it will appear to visitors. Some of the elements CSS controls include font size, font style, the overall layout, colors and other design elements. It can be applied to several types of documents, including plain XML documents, SVG documents and XUL documents. For most websites across the world, CSS is the tool of choice for creating visually attractive webpages, and it finds use not just in web applications but also in mobile apps.
CSS is designed to work hand in hand with HTML and XHTML, and the two are used in combination with one another. A CSS style sheet consists of rules made up of selectors and declarations, and its simple syntax uses a number of English keywords to define styling properties.

5.   PHP

The term 'PHP' stands for PHP: Hypertext Preprocessor, a free server-side scripting language designed not just for web development but also as a general-purpose programming platform. This widely used language, first released in the mid-1990s, now powers over 200 million websites worldwide. Some popular examples of websites powered by this platform include Facebook, WordPress, and Digg.com.
PHP is an interpreted scripting language, which means that it is usually processed by an interpreter. For this reason, the language is most suitable for server-side programming, where server tasks are performed repeatedly as part of serving a website.
The following are some more points that shall help you understand the language better:
  • PHP is an open source language that is well suited to fast prototyping.
  • This language is compatible with UNIX based OS as well as Windows OS.
  • Some industries where PHP is mostly used include startup businesses, advertising apps, and small software organizations as well as media agencies.
  • The language can be embedded in HTML directly.

6.   Ruby

First developed in 1993, Ruby is a dynamic programming language used for building mobile apps and websites. The language successfully balances imperative programming with functional programming and is highly scalable. This open source platform is not only simple to understand but also easy to write in. If you want to learn Ruby, though, you will also have to equip yourself with knowledge of Ruby on Rails (often just called Rails), a framework that makes it much more practical to build things with Ruby. For those interested in creating small business software and for those in the field of creative design, Ruby is a fitting programming language.
During its development, the idea was to come up with a language that was more productive for programmers and had concise, simple code. Ruby is mostly used on web servers that handle a lot of traffic. Some examples of platforms that make use of this programming language include Hulu, Twitter, and Scribd.

7.   C++

C++ is a general-purpose, compiled and case-sensitive programming language that is not only imperative but also offers facilities for low-level memory manipulation. Since the language combines low-level features with the features of high-level languages, it is considered a middle-level language. It was developed by Bjarne Stroustrup starting in 1979 and was later enhanced and renamed in 1983. Since C++ is an object-oriented language, it supports the four principles of object-oriented development: polymorphism, encapsulation, inheritance, and data hiding.
C++ is similar to C in many ways and is in fact largely a superset of C, meaning that most C programs are also valid C++ programs. The language has many technical details, but the key to learning it is not to get lost in those details but rather to concentrate on its concepts.
Like any other language, this language too is based on a philosophy and has certain elements that make it what it really is. C++ consists of three important parts, which are as follows:
  • The C++ standard library provides a rich set of functions for manipulating strings, sets and other data.
  • The Standard Template Library (STL) provides a rich set of methods for manipulating data structures and other elements.
  • The core C++ language provides the basic building blocks, such as literals, data types and variables.

About software developers

Job Duties for Software Developers

Generally, software developers write the computer programs used for everything from the systems that allow computers to run properly to the latest software applications for mobile devices. It’s an expanding field that requires creative minds who want to be on the cutting edge of finding new uses for technology.
Software developers typically spend their days analyzing the needs of clients and then designing a system to meet those needs. They might also recommend software upgrades to existing systems. More detailed work comes in the form of designing the step-by-step flowcharts for computing systems that show how program code must be written in order for it to work properly.
Software developers document all of their tasks to ensure that subsequent users can diagnose and fix any problems that might arise in a system, as well as perform any maintenance.

Salary and Job Outlook for Software Developers

Securing a job as a software developer requires education and training; in return, skilled developers may enjoy better-than-average compensation. The median annual wage of software developers was more than $90,500 in 2010, according to the U.S. Bureau of Labor Statistics (BLS). For developers specializing in systems software, the average rose above $94,000.
The BLS predicts a 30% increase in employment of software developers between 2010 and 2020, much faster than the 14% growth rate anticipated for all occupations.
Although there are numerous factors behind these projections, the primary underlying reason is that computer software is needed in almost every industry. The need is especially pronounced in businesses moving into mobile technology and in the healthcare industry, where records are being transferred into electronic databases.
The career and salary potential of software developers is influenced by local market conditions, as well as by work experience, education and other factors.

Software Developer Education and Training

Getting into this in-demand profession typically requires a bachelor’s degree, most likely in computer science or a related field. Some software developers have a degree in mathematics, according to the BLS.
As with many occupations, attaining an advanced degree can result in software developers securing jobs with more responsibility and higher pay.
Whichever degree is pursued, aspiring software developers should primarily focus on learning the skills needed to write, implement and maintain software. However, students may also want to have an understanding of the industry in which they wish to work – healthcare, for example, or finance.
A degree in business administration with an emphasis on computer science is one possible pathway to a career in software development, as it provides a solid business background as well as specific technical training.

Career Paths for Software Developers

Generally, software developers find work in one of two main areas: writing code for software or writing code for computing systems.
Application software developers design systems for consumer applications, such as those used in games, according to the BLS. Depending on client need, they may develop custom-made software for consumer applications or design databases for organizations.
Systems software developers design computing systems, including the user interface. These can include systems used within companies as well as operating systems for electronics such as mobile phones.
Servicemembers and veterans preparing to transition to civilian life may find the job of software developer shares skills and attributes with numerous military occupational specializations.

Advantages of open source resources

What are the advantages of open source software?

1. It’s generally free – it has been estimated that open source software collectively saves businesses $60 billion a year. These days, for virtually every paid-for proprietary software system, you will find an open source version.
2. It’s continually evolving in real time as developers add to it and modify it, which means it can be of better quality, more secure and less prone to bugs than proprietary systems, because it has so many users poring over it and weeding out problems.
3. Using open source software also means you are not locked in to using a particular vendor’s system that only works with their other systems.
4. You can modify and adapt open source software for your own business requirements, something that is not possible with proprietary systems.

Any disadvantages?

1. Because there is no requirement to create a commercial product that will sell and generate money, open source software can tend to evolve more in line with developers’ wishes than the needs of the end user.
2. For the same reason, open source software can be less “user-friendly” and not as easy to use, because less attention is paid to developing the user interface.
3. There may also be less support available for when things go wrong – open source software tends to rely on its community of users to respond to and fix problems.
4. Although the open source software itself is mostly free, there may still be some indirect costs involved, such as paying for external support.
5. Although having an open system means that there are many people identifying bugs and fixing them, it also means that malicious users can potentially view it and exploit any vulnerabilities.

The practicalities

You can download open source software onto your computer system in the same way you would proprietary software. Some software providers such as Alfresco, MySQL and Ingres offer both open source versions of their software and paid-for proprietary versions.

Things to consider

Because of the way it has been developed, open source software can require more technical know-how than commercial proprietary systems, so you may need to put time and effort into training employees to the level required to use it.

Top tip

Start with the most popular open source software systems that have built up a large community of support behind them, so you have somewhere to go to if you need advice.

Friday, 26 February 2016

Benefits of Having technology in education

1. Preparing Students for the Future
First and foremost, your job as an educator is to prepare your students for the future, and in order to do so you need to incorporate mobile technology in the classroom. Working with mobile devices will not only be a part of their everyday lives as adults, but it will also be a vital part of many career paths. Knowing how to use mobile devices appropriately is an important skill in this increasingly connected world. To properly prepare your students for the future as they transition into the workforce, incorporating mobile technology in the classroom is key.

2. Up-to-date learning

The old days of looking for information in encyclopedias are long gone. Having mobile devices in the classroom allows students instant access to the latest news, information, statistics, etc. Virtually every question they have is at their fingertips, keeping them connected with what’s going on around them and ensuring they are always well informed with the most up-to-date information.

3. Alternative to textbooks

Many textbooks are not the most relevant sources of information. Today’s generation has grown accustomed to instant, updated information. Textbooks can’t provide students with the latest information the way a mobile device can. Also, having digital textbooks on their mobile devices keeps students more organized and gives them easy access to their materials. No one likes lugging around big textbooks. Many digital textbooks are constantly updated and often more vivid, helpful, creative, and a lot cheaper than those old heavy books.

4. Learning goes outside of the classroom

By allowing mobile devices in school, you can expand learning outside of the classroom. Students no longer have access to information only during computer lab time (which is itself becoming extinct); they can look up information from anywhere on campus. Collaboration will increase as students use these devices as research tools during projects and group work.
Plus, students love technology, so they are likely to be excited about it and to continue learning outside of school hours. Having learning apps, digital textbooks and other materials on mobile devices allows them to get in extra studying and learning during downtime. They can carry their books and notes with them at all times and have instant access to materials. If students are really excited and engaged in learning inside the classroom, they are likely to continue learning outside of it, and they can do so with mobile technology.

Software Role in computers

While just about everyone uses a computer in some way, shape or form on a daily basis, there are relatively few people who understand how vitally important computer software is to the usefulness and functionality of even simple devices. From very basic items such as a digital watch, to handy innovations in cell phones, to the grand super computing behemoths that manage things such as space shuttle launches, none of these machines could function without the programming that gives them life. 

Essentially all programs and applications for computers are a set of instructions designed to create particular outcomes. A computer program is a collection of these instructions that have a common purpose. A collection of related programs to carry out coordinated computing tasks is referred to as a package. 

One great example of a package of software programs would be an application that handles accounting related tasks. Such a package would have a number of modules, or independent programs, that function together to comprise a complete package. 

For instance, the accounting package might contain a bookkeeping program, an audit program, a database management program, a tax preparation program, a time tracking and billing program or any number of other related programs. In many cases each of these kinds of accounting programs could stand alone, but they become much more powerful when they operate together as a complete package.

These kinds of packages that contain various modules, or programs, have become very popular because the data is shared between the various programs. This reduces the need to re-enter data and eliminates tasks of exporting and importing information from one program to the other. This not only saves time but it also significantly reduces the possibility of errors, because even when data is not re-keyed there are numerous possibilities for data to become corrupt when going through exporting and importing functions. 

All computers operate with what is known as system software, or the operating system. This programming provides the very basic instructions for how the computer interacts with the user and how the various programs and packages operate. 

Windows is the most commonly used personal computer operating system, with the Macintosh OS X operating system being the next most common system. Other operating systems include Linux and Unix, which are often used in more high-end computing situations. 

If you have computer work that needs to be done, then there are at least three components in play. The hardware is the first component which is actually what most people think of when talking about computers. The hardware includes the "box," the monitor, the keyboard, the mouse and any other physical components.

The operating system software is the second of the three components. Which operating system runs on your computer will have a great deal to do with which programs you are able to run, and it will also make a difference in how easy or difficult the computer is to use overall. While Windows computers are more prevalent, many people say that Macintosh computers are more user friendly because of the operating system.

Basic C trains You to Write Efficient Code

C is one of, if not the, most widely used programming languages. There are a few reasons for this. As noted programmer and writer Joel Spolsky says, C is to programming as learning basic anatomy is to a medical doctor. C is a "machine level" language, so you'll learn how a program interacts with the hardware and learn the fundamentals of programming at the lowest—hardware—level (C is the foundation for Linux/GNU). You learn things like debugging programs, memory management, and how computers work that you don't get from higher level languages like Java—all while prepping you to code efficiently for other languages. C is the "grandfather" of many other higher level languages, including Java, C#, and JavaScript.

That said, coding in C is stricter and has a steeper learning curve than other languages, and if you're not planning on working on programs that interface with the hardware (tap into device drivers, for example, or operating system extensions), learning C will add to your education time, perhaps unnecessarily. Stack Overflow has a good discussion on C versus Java as a first language, with most people pointing towards C. However, personally, although I'm glad I was exposed to C, I don't think it's a very beginner-friendly language. It'll teach you discipline, but you'll have to learn an awful lot before you can make anything useful. Also, because it's so strict, you might end up frustrated.

10 qualities of The Perfect programmer

Every quality of a perfect programmer has a range depending on the specific problem and context. There is no absolutely perfect programmer for all problems (at least on this planet). The perfect programmer for a particular problem should have:


  1. Intellect – can understand the problem, translate and express ideas in clear and readable code, and has an analytical and logical mind (range: from building programs for narrow, well-defined requests to conquering freakishly complex problems in an elegant way)
  2. Personality – has the right mixture of personal traits (detail-oriented vs. creative, flexible vs. disciplined, sociable vs. independent)
  3. Expertise – knowledge and experience for solving client’s problems in the specific context with chosen technologies (range: a specialist in one technology to a veteran programmer with broad experience in different domains and platforms)
  4. Motivation – cares about work, shows enthusiasm, interest and love for programming (range: from working for money only to implementing interesting ideas in spare time without pay)
  5. Maturity – knows and uses sound software development principles, practices and approaches such as agile, design and architecture patterns, domain-driven design, unit testing and refactoring (range: from an enthusiastic amateur to a black-belt guru who can invent new approaches on the go)
  6. Pragmatism – understands what is possible, loves simplicity and avoids over-engineering; understands business goals, keeps in touch with reality and focuses on what should be done (range: from a spontaneous artist to a self-driven pragmatic achiever)
  7. Cooperation – listens, accepts that other people could have better ideas, supports team goals without a hidden agenda, shares ideas and knowledge and coaches others (range: from idea challenger to team coach)
  8. Communication – effectively communicates and exchanges ideas, supports knowledge and decisions about the system with clear explanations, justifications and answers (range: from a quiet introvert to a system evangelist)
  9. Potential – has professional goals, good learning skills, curiosity and adaptability, and performs constant self-correction (range: from a person who has reached his limits to a future programming star)
  10. Vision – sees the big picture, understands context, trends and people, aligns actions with the team's and company's implicit goals, and contributes to building a shared vision for the software system (range: from interested only in programming to entrepreneurial visionary)

Why we need software testing

The information produced by software testing contributes towards reducing ambiguity about the system. For example, when deciding whether to release a product, the decision makers would need to know the state of the product, including aspects such as the conformance of the product to requirements, the usability of the product, any known risks, the product’s compliance with any applicable regulations, etc.
Software testing enables making objective assessments regarding the degree of conformance of the system to stated requirements and specifications.
Testing verifies that the system meets the different requirements, including functional, performance, reliability, security, usability and so on. This verification is done to ensure that we are building the system right.
In addition, testing validates that the system being developed is what the user needs. In essence, validation is performed to ensure that we are building the right system. Apart from helping make decisions, the information from software testing helps with risk management.
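As a small illustration of verification (a sketch using Python's built-in unittest module, my own choice rather than anything prescribed above), the tests below check a hypothetical function against a stated requirement that an order total must include 10% tax, rounded to two decimals.

# Verification in miniature: does the implementation conform to the stated requirement?
import unittest

def order_total(subtotal: float, tax_rate: float = 0.10) -> float:
    # Hypothetical requirement: total = subtotal plus tax, rounded to 2 decimals.
    return round(subtotal * (1 + tax_rate), 2)

class OrderTotalRequirement(unittest.TestCase):
    def test_total_includes_tax(self):
        self.assertEqual(order_total(100.00), 110.00)

    def test_total_is_rounded_to_two_decimals(self):
        self.assertEqual(order_total(19.99), 21.99)   # 19.99 * 1.10 = 21.989 -> 21.99

if __name__ == "__main__":
    unittest.main()

Validation, by contrast, would ask the user whether a flat 10% tax is what they actually needed in the first place.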
Software testing contributes to improving the quality of the product. You would notice that we have not mentioned anything about defects/bugs up until now.
While finding defects / bugs is one of the purposes of software testing, it is not the sole purpose. It is important for software testing to verify and validate that the product meets the stated requirements / specifications.
Quality improvements help the organization to reduce post release costs of support and service, while generating customer good will that could translate into greater revenue opportunities.

Utility software and Device drivers

Utility Software
Utility software helps to manage, maintain and control computer resources. Operating systems typically contain the necessary tools for this, but separate utility programs can provide improved functionality. Utility software is often somewhat technical and targeted at users with a solid knowledge of computers. If you use a computer mostly for e-mail, some Internet browsing and typing up a report, you may not have much need for these utilities. However, if you are an avid computer user, these utilities can help make sure your computer stays in tip-top shape.
Examples of utility programs are antivirus software, backup software and disk tools. Let's look at each of these in a bit more detail.
Antivirus software, as the name suggests, helps to protect a computer system from viruses and other harmful programs. A computer virus is a computer program that can cause damage to a computer's software, hardware or data. It is referred to as a virus because it has the capability to replicate itself and hide inside other computer files.
One of the most common ways to get a virus is to download a file from the Internet. Antivirus software scans your online activity to make sure you are not downloading infected files. New viruses are coming out all the time, so antivirus software needs to be updated very frequently.
Backup software helps in the creation of a backup of the files on your computer. Most computer systems use a hard disk drive for storage. While these are generally very robust, they can fail or crash, resulting in costly data loss. Backup software helps you copy the most important files to another storage device, such as an external hard disk. You can also make an exact copy of your hard disk.
Increasingly, backup software uses cloud storage to create backups. This typically means you pay a fee to use the storage space of a third party and use their backup software to manage which files are going to be backed up.
Disk tools include a range of different tools to manage hard disk drives and other storage devices. This includes utilities to scan the hard disks for any potential problems, disk cleaners to remove any unnecessary files, and disk defragmenters to re-organize file fragments on a hard disk drive to increase performance. Disk tools are important because a failure of a hard disk drive can have disastrous consequences. Keeping disks running efficiently is an important part of overall computer maintenance.

Device Drivers

A device driver is a computer program that controls a particular device that is connected to your computer. Typical devices are keyboards, printers, scanners, digital cameras and external storage devices. Each of these needs a driver in order to work properly.
Device drivers act as a translator between the operating system of the computer and the device connected to it. For many types of devices, the necessary drivers are built into the operating system. When you plug in a device, the operating system starts looking for the right driver, installs it and you are ready to start using the device. This is referred to as plug-and-play and is much preferred over having to manually install the correct drivers.
There are so many different devices, however, that not all of them are built into the operating system. As an alternative, the operating system can look online to find the right driver to install. Many hardware devices, however, come with the necessary drivers. For example, if you buy a printer, it may come with a CD that typically will include the correct driver. The advantage of this is that the hardware manufacturer can make sure you have the right driver for the printer.

Application software

Application software is a subclass of computer software that employs the capabilities of a computer directly and thoroughly to a task that the user wishes to perform.

This should be contrasted with system software which is involved in integrating a computer's various capabilities, but typically does not directly apply them in the performance of tasks that benefit the user.
In this context the term application refers to both the application software and its implementation.
A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system).
The power plant merely generates electricity, which is not itself of any real use until harnessed to an application like the electric light, which performs a service that benefits the user.
Typical examples of software applications are word processors, spreadsheets, and media players.
Multiple applications bundled together as a package are sometimes referred to as an application suite.
Some might bundle together a word processor, a spreadsheet, and several other discrete applications.
The separate applications in a suite usually have a user interface that has some commonality making it easier for the user to learn and use each application.
And often they may have some capability to interact with each other in ways beneficial to the user.
For example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application.
User-written software tailors systems to meet the user's specific needs.
User-written software includes spreadsheet templates, word processor macros, scientific simulations, graphics and animation scripts.
Even email filters are a kind of user software.
Users create this software themselves and often overlook how important it is.
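As a hedged illustration of user-written software, the short Python sketch below is the kind of throwaway e-mail filter a user might script for personal use; the message format and the filtering rules are invented.

# A user-written e-mail filter: keep anything that doesn't look like spam.
messages = [
    {"sender": "boss@example.com", "subject": "Quarterly report"},
    {"sender": "promo@deals.example", "subject": "WIN A FREE CRUISE!!!"},
    {"sender": "team@example.com", "subject": "Stand-up moved to 10am"},
]

def looks_like_spam(msg):
    return msg["subject"].isupper() or "promo" in msg["sender"]

inbox = [m for m in messages if not looks_like_spam(m)]
print([m["subject"] for m in inbox])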