Credits : Gsmarena


Some users are already receiving a new software version for the Honor 10, build number COL-L29, which adds a handful of new features – some more useful than others.

The first one you’ll notice is the so-called Party Mode app, which connects several devices via NFC to create a surround-sound experience by playing one song simultaneously. The Honor View 10 also got the app, but we can’t confirm whether both devices can be paired via Party Mode. We wonder, however, how many users will actually use it if it’s restricted to Honor devices only.

Anyway, let’s start with the helpful improvements. The default camera app got a new AI feature – once you start taking photos, a small text message shows what kind of effect the AI is going to apply, given the scene.

The fingerprint scanner gets a small tweak as well. The phone now detects how well you’ve recorded your fingerprint and whether it will be recognized at an odd angle; if not, a message will pop up and advise you to delete and re-record the saved fingerprint.

Other improvements include a new theme, a live wallpaper, power optimizations and the May security patch.

This article is shared by | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Exclusivereportage


The global Complaint Management Software market has been broadly presented in this report, taking into consideration key aspects such as growth prospects, restraining factors, competitive landscape, and future opportunities. Readers are given vital statistics about the current and future share of the market, verifiable projections about top segments and sub-segments, CAGR, and regional shares. The authors of the report have applied qualitative as well as quantitative analysis to evaluate the market in depth.

The report supplies an extensive exploration of essential market dynamics and recent trends, together with pertinent market segments and factors influencing the expansion of the Complaint Management Software market. It spotlights the regional market, the major market players, and the numerous market segments, with an extensive assessment of several divisions along with their applications.

An exhaustive assessment of the value chain of the global Complaint Management Software market includes deep insight into prominent end users, distributors, technological solutions, raw material suppliers, and manufacturers. The report is intended as a useful guideline for apprehending the current and historical performance of the market in terms of both value and volume, which could direct players to make informed decisions in their individual businesses.

What our report offers –

  • A complete Complaint Management Software market size and share analysis
  • The prominent industry players in the market
  • Opportunities for new entrants to the market
  • Market estimations based on forecast trends, to support strategic recommendations in the business segments
  • Detailed company profiles
  • Coverage of trends such as globalization, technology advancement, over-capacity in developed markets, market fragmentation, regulation & environmental concerns, and product proliferation. The performance and characteristics of the global Complaint Management Software market are evaluated using quantitative and qualitative methods to provide a clear picture of the current situation and the forecast trend.
  • The study objectives of this report are:

    To study and forecast the market size of Complaint Management Software in the global market.

    To analyze the global key players, SWOT analysis, value and global market share for top players.

    To define, describe and forecast the market by type, end use and region.

    To analyze and compare the market status and forecast between China and other major regions, namely the United States, Europe, Japan, Southeast Asia, India and the Rest of the World.

    To analyze the key global regions’ market potential and advantage, opportunity and challenge, restraints and risks.

    To identify significant trends and factors driving or inhibiting the market growth.

    To analyze the opportunities in the market for stakeholders by identifying the high growth segments.

    To strategically analyze each submarket with respect to its individual growth trend and its contribution to the market.

    To analyze competitive developments such as expansions, agreements, new product launches, and acquisitions in the market.

    To strategically profile the key players and comprehensively analyze their growth strategies.

    Table of Contents:

    Complaint Management Software Market Research Report 2017

    Chapter 1: Complaint Management Software Market Overview

    Chapter 2: Complaint Management Software Market Economic Impact on Industry

    Chapter 3: Complaint Management Software Market Competition by Manufacturers

    Chapter 4: Complaint Management Software Market Production, Revenue (Value) by Region

    Chapter 5: Complaint Management Software Market Supply (Production), Consumption, Export, Import by Regions

    Chapter 6: Complaint Management Software Market Production, Revenue (Value), Price Trend by Type

    Chapter 7: Complaint Management Software Market Analysis by Application

    Chapter 8: Manufacturing Cost Analysis

    Chapter 9: Industrial Chain, Sourcing Strategy and Downstream Buyers

    Chapter 10: Marketing Strategy Analysis, Distributors/Traders

    Chapter 11: Market Effect Factors Analysis

    Chapter 12: Complaint Management Software Market Forecast


Credits : Forbes


“Which programming language should you learn if you want a job at Google, Amazon, Facebook or any big software company?” originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by John L. Miller, Software Engineer/Architect @ Microsoft, Amazon, Google, PhD, on Quora:

Getting hired by one of the big software companies requires two things:

    1. Getting chosen for an interview.
    2. Passing the interview.

In the ideal case, you know the same programming language as the people who will evaluate you technically. Your experience is compelling enough that you are chosen to be interviewed. Your approach to problem solving and coding is crystal clear, and since the interviewer is fluent in the programming language you use, they can admire your handiwork and be duly impressed.

In a more typical case, the interviewer knows several programming languages and is best at one. The candidate knows at least one language the interviewer knows.

For example, I might be best at C++, but I can still interview someone who is best at Java or C#. I’d have a much harder time interviewing someone who insists on using Perl. It’d be impossible for me to interview someone who used Haskell or Smalltalk: I wouldn’t be able to evaluate their approach, the elegance of what they write, and so on.

The best programming language to get a job at any big software company will depend upon the group you’re interviewing with. At Amazon some groups use C++, some use Java, some use PHP or Ruby, and so on. It won’t be the same language for all of Amazon, but you are probably safe using Java for services and devices, and PHP/JavaScript for front-end work.

Knowing a similar language is good enough. Not being able to communicate with the interviewer in a shared style of language is a fail.

In general, you’re safe knowing Java, C#, or C++ for any non-front-end position at a big software company.

A final caution: don’t waste time learning the language that the company is using for the group you want to go to just for that purpose: you won’t be good enough at it, and will hurt rather than help your chances. Instead, continue growing in the language you’re best with, and show what a great coder and problem solver you are in the interview.


Credits : Livetradingnews


USD/PHP (PHP=X) Taking Profits, Waiting for Re-Entry

Overall, the bias in prices is: Sideways.

Prices remain vulnerable, however, to a correction towards 51.98.

The projected upper bound is: 52.78.

The projected lower bound is: 51.90.

The projected closing price is: 52.34.


A black body occurred (because prices closed lower than they opened).
During the past 10 bars, there have been 5 white candles and 5 black candles. During the past 50 bars, there have been 24 white candles and 26 black candles for a net of 2 black candles.

A spinning top occurred (a spinning top is a candle with a small real body). Spinning tops identify a session in which there is little price action (as defined by the difference between the open and the close). During a rally or near new highs, a spinning top can be a sign that prices are losing momentum and the bulls may be in trouble.

Momentum Indicators

Momentum is a general term used to describe the speed at which prices move over a given time period. Generally, changes in momentum tend to lead to changes in prices. This expert shows the current values of four popular momentum indicators.

Stochastic Oscillator

One method of interpreting the Stochastic Oscillator is looking for overbought areas (above 80) and oversold areas (below 20). The Stochastic Oscillator is 14.3458. This is an oversold reading. However, a signal is not generated until the Oscillator crosses above 20. The last signal was a sell 15 period(s) ago.
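The %K line behind that reading can be sketched in a few lines of Python. The bars and lookback period below are hypothetical, and real feeds also smooth %K into a %D signal line:

```python
def stochastic_k(highs, lows, closes, period=14):
    """Raw %K: where the latest close sits inside the recent high-low range."""
    highest_high = max(highs[-period:])
    lowest_low = min(lows[-period:])
    return 100 * (closes[-1] - lowest_low) / (highest_high - lowest_low)

# Hypothetical bars: the close sits near the bottom of a 52.0-53.0 range,
# so the oscillator prints a low (oversold-leaning) value.
k = stochastic_k(highs=[53.0] * 14, lows=[52.0] * 14,
                 closes=[52.5] * 13 + [52.2])
# k is about 20: only a fifth of the way up the range
```

A reading under 20, like the 14.3458 quoted above, means the close is pinned near the bottom of its recent range.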

Relative Strength Index (RSI)

The RSI shows overbought (above 70) and oversold (below 30) areas. The current value of the RSI is 50.80. This is not a topping or bottoming area. A buy or sell signal is generated when the RSI moves out of an overbought/oversold area. The last signal was a sell 75 period(s) ago.
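As a sketch of where that 50.80 figure comes from (made-up closing prices; Wilder’s original RSI smooths the averages exponentially rather than using a simple window):

```python
def rsi(closes, period=14):
    """Simplified RSI: total gain vs. total loss over the last `period` changes."""
    window = closes[-(period + 1):]
    deltas = [b - a for a, b in zip(window, window[1:])]
    gains = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    if losses == 0:
        return 100.0  # no losing bars at all: maximally overbought
    rs = gains / losses            # relative strength
    return 100 - 100 / (1 + rs)

# A hypothetical series that alternates +1/-1: gains and losses balance,
# so RSI lands exactly on the neutral midpoint of 50.
closes = [50 + (i % 2) for i in range(15)]
value = rsi(closes)
```

A value near 50, like the one reported above, signals balanced buying and selling pressure rather than a top or bottom.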

Commodity Channel Index (CCI)

The CCI shows overbought (above 100) and oversold (below -100) areas. The current value of the CCI is -85. This is not a topping or bottoming area. The last signal was a sell 5 period(s) ago.


Moving Average Convergence/Divergence (MACD)

The Moving Average Convergence/Divergence indicator (MACD) gives signals when it crosses its 9 period signal line. The last signal was a sell 1 period(s) ago.
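The MACD line itself is just the gap between a fast and a slow exponential moving average. A minimal sketch (the EMA here is seeded with the first value for simplicity; production implementations typically seed with a simple moving average):

```python
def ema(values, period):
    """Exponential moving average, seeded with the first value for simplicity."""
    k = 2 / (period + 1)           # smoothing factor
    e = values[0]
    for v in values[1:]:
        e = v * k + e * (1 - k)
    return e

def macd_line(closes, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA. Crossing a 9-period EMA of this
    series (the signal line) is what generates buy/sell signals."""
    return ema(closes, fast) - ema(closes, slow)

# On a flat hypothetical series both EMAs coincide, so the MACD line is zero;
# on a rising series the fast EMA leads and the line turns positive.
flat = macd_line([52.34] * 40)
```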

Rex Takasugi – TD Profile

FOREX PHP= closed down -0.011 at 52.340. Volume was 100% below average (consolidating) and Bollinger Bands were 29% narrower than normal.

Open    High    Low     Close   Volume
52.349  52.349  52.340  52.340  2

Technical Outlook
Short Term: Oversold
Intermediate Term: Bullish
Long Term: Bullish

Moving Averages:  10-period  50-period  200-period
Close:            52.51      52.15      51.38
Volatility:       3          5          6
Volume:           3,744      3,545      3,753

Short-term traders should pay closer attention to buy/sell arrows while intermediate/long-term traders should place greater emphasis on the Bullish or Bearish trend reflected in the lower ribbon.


FOREX PHP= is currently 1.9% above its 200-period moving average and is in an upward trend. Volatility is low compared with the average volatility over the last 10 periods. Our volume indicators reflect volume flowing into and out of PHP= at a relatively equal pace (neutral). Our trend forecasting oscillators are currently bullish on PHP= and have had this outlook for the last 17 periods. Our momentum oscillator has set a new 14-period low while the security price has not. This is a bearish divergence.



Credits : Javaworld


The Java virtual machine is a program whose purpose is to execute other programs. It’s a simple idea that also stands as one of our greatest examples of coding kung fu. The JVM upset the status quo for its time, and continues to support programming innovation today.

Use and definitions for the JVM

The JVM has two primary functions: to allow Java programs to run on any device or operating system (known as the “Write once, run anywhere” principle), and to manage and optimize program memory. When Java was released in 1995, all computer programs were written to a specific operating system, and program memory was managed by the software developer. So the JVM was a revelation.

Having a technical definition for the JVM is useful, and there’s also an everyday way that software developers think about it. Let’s break those down:

  • Technical definition: The JVM is the specification for a software program that executes code and provides the runtime environment for that code.
  • Everyday definition: The JVM is how we run our Java programs. We configure the JVM’s settings and then rely on it to manage program resources during execution.

When developers talk about the JVM, we usually mean the process running on a machine, especially a server, that represents and controls resource usage for a Java app. Contrast this to the JVM specification, which describes the requirements for building a program that performs these tasks.

Memory management in the JVM

The most common interaction with a running JVM is to check the memory usage in the heap and stack. The most common adjustment is tuning the JVM’s memory settings.

Garbage collection

Before Java, all program memory was managed by the programmer. In Java, program memory is managed by the JVM. The JVM manages memory through a process called garbage collection, which continuously identifies and eliminates unused memory in Java programs. Garbage collection happens inside a running JVM.
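The JVM’s collectors are far more sophisticated, but the core idea, reclaiming objects the program can no longer reach, can be illustrated with Python’s own collector (an analogy, not JVM code):

```python
import gc

class Node:
    """A tiny object that can point at another Node."""
    def __init__(self):
        self.ref = None

def make_cycle():
    # Two objects that reference each other but are referenced by nothing
    # else once this function returns: unreachable, therefore garbage.
    a, b = Node(), Node()
    a.ref, b.ref = b, a

gc.collect()          # clear out any pre-existing garbage first
make_cycle()
freed = gc.collect()  # the collector finds and frees the orphaned cycle
# freed counts the unreachable objects reclaimed (at least the two Nodes)
```

The JVM performs the same kind of reachability analysis continuously, without the program asking for it.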

In the early days, Java came under a lot of criticism for not being as “close to the metal” as C++, and therefore not as fast. The garbage collection process was especially controversial. Since then, a variety of algorithms and approaches have been proposed and used for garbage collection. With consistent development and optimization, garbage collection has vastly improved.

The JVM in three parts

It could be said there are three aspects to the JVM: specification, implementation and instance. Let’s consider each of these.

1. The JVM specification

First, the JVM is a software specification. In a somewhat circular fashion, the JVM spec highlights that its implementation details are not defined within the spec, in order to allow for maximum creativity in its realization:

“To implement the Java virtual machine correctly, you need only be able to read the class file format and correctly perform the operations specified therein.”

J.S. Bach once described creating music similarly:

“All you have to do is touch the right key at the right time.”

So, all the JVM has to do is run Java programs correctly. Sounds simple, might even look simple from outside, but it is a massive undertaking, especially given the power and flexibility of the Java language.

2. JVM implementations

Implementing the JVM specification results in an actual software program, which is a JVM implementation. In fact, there are many JVM implementations, both open source and proprietary. OpenJDK’s HotSpot JVM is the reference implementation, and remains one of the most thoroughly tried-and-tested codebases in the world. HotSpot is also the most commonly used JVM.

Almost all licensed JVMs are created as forks of OpenJDK and the HotSpot JVM, including Oracle’s licensed JDK. Developers creating a licensed fork of OpenJDK are often motivated by the desire to add OS-specific performance improvements. Typically, you download and install the JVM as a bundled part of a Java Runtime Environment (JRE).

3. A JVM instance

After the JVM spec has been implemented and released as a software product, you may download and run it as a program. That downloaded program is an instance (or instantiated version) of the JVM.

Most of the time, when developers talk about “the JVM,” we are referring to a JVM instance running in a software development or production environment. You might say, “Hey Anand, how much memory is the JVM on that server using?” or, “I can’t believe I created a circular call and a stack overflow error crashed my JVM. What a newbie mistake!”

Loading and executing class files in the JVM

We’ve talked about the JVM’s role in running Java applications, but how does it perform its function? In order to run Java applications, the JVM depends on the Java class loader and a Java execution engine.

The Java class loader in the JVM

Everything in Java is a class, and all Java applications are built from classes. An application could consist of one class or thousands. In order to run a Java application, a JVM must load compiled .class files into a context, such as a server, where they can be accessed. A JVM depends on its class loader to perform this function.

The Java class loader is the part of the JVM that loads classes into memory and makes them available for execution. Class loaders use techniques like lazy-loading and caching to make class loading as efficient as it can be. That said, class loading isn’t the epic brain-teaser that (say) portable runtime memory management is, so the techniques are comparatively simple.
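The JVM’s loader is a black box, but the lazy-loading idea itself is general. As an analogy (not JVM code), Python’s importlib exposes the same trick: register the module object up front, and defer the expensive load until first use:

```python
import importlib.util
import sys

def lazy_import(name):
    """Register a module whose real loading is deferred until first attribute access."""
    spec = importlib.util.find_spec(name)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module        # cached: later imports reuse this object
    spec.loader.exec_module(module)   # schedules, rather than performs, the load
    return module

json_mod = lazy_import("json")        # nothing in the json module has run yet
result = json_mod.dumps({"a": 1})     # first use triggers the actual load
```

The caching half of the technique is the `sys.modules` entry: once loaded, every later lookup hits the cache, which is the same reason a JVM loads each class only once.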

Every Java virtual machine includes a class loader. The JVM spec describes standard methods for querying and manipulating the class loader at runtime, but JVM implementations are responsible for fulfilling these capabilities. From the developer’s perspective, the underlying class loader mechanisms are typically a black box.

The execution engine in the JVM

Once the class loader has done its work of loading classes, the JVM begins executing the code in each class. The execution engine is the JVM component that handles this function. The execution engine is essential to the running JVM. In fact, for all practical purposes, it is the JVM instance.

Executing code involves managing access to system resources. The JVM execution engine stands between the running program–with its demands for file, network and memory resources–and the operating system, which supplies those resources.

How the execution engine manages system resources

System resources can be divided into two broad categories: memory and everything else.

Recall that the JVM is responsible for disposing of unused memory, and that garbage collection is the mechanism that does that disposal. The JVM is also responsible for allocating and maintaining the referential structure that the developer takes for granted. As an example, the JVM’s execution engine is responsible for taking something like the new keyword in Java, and turning it into an OS-specific request for memory allocation.

Beyond memory, the execution engine manages resources for file system access and network I/O. Since the JVM is interoperable across operating systems, this is no mean task. In addition to each application’s resource needs, the execution engine must be responsive to each OS environment. That is how the JVM is able to handle in-the-wild demands.

JVM evolution: Past, present, future

In 1995, the JVM introduced two revolutionary concepts that have since become standard fare for modern software development: “Write once, run anywhere” and automatic memory management. Software interoperability was a bold concept at the time, but few developers today would think twice about it. Likewise, whereas our engineering forebears had to manage program memory themselves, my generation grew up with garbage collection.

We could say that James Gosling and Brendan Eich invented modern programming, but thousands of others have refined and built on their ideas over the following decades. Whereas the Java virtual machine was originally just for Java, today it has evolved to support many scripting and programming languages, including Scala, Groovy, and Kotlin. Looking forward, it’s hard to see a future where the JVM isn’t a prominent part of the development landscape.

More JavaWorld tutorials about the JVM

  • Trash talk, how Java recycles memory
  • The lean, mean virtual machine
  • Bytecode basics: A first look at bytecodes in the JVM
  • JVM exceptions: How the JVM handles exceptions
  • JVM performance optimization: A JVM technology primer


Credits : Infoworld


MySQL, the popular open-source database that’s a standard element in many web application stacks, has unveiled the first release candidate for version 8.0.

Features to be rolled out in MySQL 8.0 include:

  • First-class support for Unicode 9.0 out of the box.
  • Window functions and recursive SQL syntax, for queries that previously weren’t possible or would have been difficult to write.
  • Expanded support for native JSON data and document-store functionality.
With version 8.0, MySQL is jumping several versions in its numbering (from 5.7), due to 6.0 being nixed and 7.0 being reserved for the clustering version of MySQL.

MySQL 8.0’s expected release date
MySQL hasn’t committed to a release date for MySQL 8.0, but MySQL’s policy is “a new [general] release every 18-24 months.” The last general release was October 21, 2015, for MySQL 5.7, so MySQL 8.0’s production version is likely to come in October 2017.

MySQL 8.0’s road to standard Unicode
Moving to Unicode by default is arguably one of the biggest changes planned. MySQL has long had persistent, persnickety problems with Unicode, so a long-standing game plan for MySQL 8.0 was to fix as many of those lingering Unicode issues as possible.

MySQL 8.0 no longer uses latin1 as the default encoding, to discourage new users from choosing a troublesome legacy option. The recommended default character set for MySQL 8.0 is now utf8mb4, which is intended to be faster than the now-deprecated utf8mb3 character set and also to support more flexible collations and case sensitivity.

The improved Unicode support will cover not only non-Western character sets but also the rise of emoji.

MySQL 8.0 gets current with window functions
Many other implementations of SQL support window functions, a way to perform aggregate calculations across multiple rows while still allowing access to the individual rows from the query. It’s possible to do this in MySQL without window function support in the database, but it’s cumbersome and slow. To overcome its window deficit, MySQL 8.0 adds window functions via the standard OVER SQL keyword, in much the same way it is implemented in competing products like PostgreSQL.
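The OVER clause behaves the same way across engines that implement the standard. A sketch using SQLite (3.25+, which also supports window functions) instead of MySQL itself, so it runs self-contained; the table and data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 30), ("west", 20)])

# Aggregate per partition while every individual row stays visible:
# exactly what a plain GROUP BY cannot do.
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM sales
    ORDER BY region, amount
""").fetchall()
# rows: [('east', 10, 40), ('east', 30, 40), ('west', 20, 20)]
```

Each row carries both its own amount and the total for its region, in one pass and without a self-join.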

Another feature in the same vein, recursive common table expressions, lets you perform recursive operations as part of a query, without having to resort to cursors or other performance-sapping workarounds.
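A recursive common table expression in action, again sketched with SQLite for a self-contained example (the staff table and IDs are hypothetical): it walks a management chain upward without any cursor.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, manager_id INTEGER)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 2)])

# Seed with one employee, then repeatedly join back to find each manager.
chain = conn.execute("""
    WITH RECURSIVE chain(id, manager_id) AS (
        SELECT id, manager_id FROM staff WHERE id = 3
        UNION ALL
        SELECT s.id, s.manager_id
        FROM staff s JOIN chain c ON s.id = c.manager_id
    )
    SELECT id FROM chain
""").fetchall()
ids = [r[0] for r in chain]
# ids walks from employee 3 up through the chain of command to the root
```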

MySQL 8.0 works better with documents and JSON
With MySQL 5.7 came JSON support, to make MySQL competitive with NoSQL databases that use JSON natively. MySQL 8.0 expands JSON support with better performance, functions to allow extracting ranges from a JSON query (such as a “top N”-type request), and new aggregation functions that let MySQL-native structured data and semistructured JSON data be merged in a query.
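MySQL’s JSON functions (JSON_EXTRACT and friends) have their own names and semantics; the flavor of path-based extraction can be sketched with SQLite’s JSON1 functions, which ship with most Python builds, on a made-up document:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
conn.execute("""INSERT INTO docs VALUES ('{"user": "ana", "score": 42}')""")

# Pull typed values out of stored JSON text by path, inside ordinary SQL.
row = conn.execute(
    "SELECT json_extract(body, '$.user'), json_extract(body, '$.score') FROM docs"
).fetchone()
# the score comes back as an integer, not as text
```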

Another improvement related to JSON involves MySQL’s document-store abilities. Reads and writes to MySQL’s document store are transactionally consistent, allowing rollback operations on changes to JSON data. Document data stored in the open GeoJSON format for geospatial data can be indexed, so you can search by proximity.

The other key features in MySQL 8.0
Other changes planned for MySQL 8.0 include:

  • More options for how to handle locked rows, via the SKIP LOCKED and NOWAIT keywords. SKIP LOCKED allows locked rows to be skipped during an operation; NOWAIT throws an error immediately on encountering a locked row.
  • MySQL can automatically scale to the total amount of memory available, to make the best possible use of virtual machine deployments.
  • Indexes can be manually excluded from the query optimizer via the “invisible index” feature. Indexes marked as invisible are kept up to date with changes to tables, but aren’t used to optimize queries. One suggested use for this is to nondestructively determine whether a particular index needs to be kept.
Where to download MySQL 8.0
You can download development releases of MySQL 8.0 now for Windows, MacOS, several versions of Linux, FreeBSD, and Solaris; the source code is also available. Scroll down the downloads page to the Development Releases tab to get them.


Credits :


On 27 May 2003, software developer Matt Mullenweg launched the first version of a computer programme which went on to revolutionise publishing. Fifteen years later, WordPress is the engine behind 30% of the world’s websites, powering major media and e-commerce sites as well as small blogs – the personal projects that inspired its creation.

This weekend will see WordPress commemorated at WordCamp Belfast – a volunteer-run event that is one of thousands of meetups held around the globe, run by and for the international WordPress community.

And after fifteen years, it’s still very much a community project: open-source, free to all, maintained and developed by volunteers. No one company ‘owns’ the WordPress software in the way, for example, that Microsoft Office or Apple’s iOS are licensed and sold. Instead, WordPress is licensed under the GNU General Public License, which requires that it be free for anyone to copy, distribute or modify the software.

That’s part of its success, of course. Free to download and easy to install, WordPress was built to run with other open-source tools, on what is known as the LAMP stack: Linux (operating system), Apache (web server), MySQL (database), and PHP (programming language). All of this software is free to copy and install and has gone on to dominate the web world for similar reasons. (Almost half of all web servers run on Apache.)

To that essential package of free website software, WordPress added a killer application: a content management system (CMS).

Web pages are made of text, pictures, and increasingly, audio and video. When Tim Berners-Lee invented the World Wide Web, its essential feature was Hypertext Markup Language (HTML): a collection of text ‘tags’ which tell a web browser how to arrange and display the different pieces of the page: how to format text; where to put the images; where the links should be – and what they should link to.

HTML isn’t difficult – most people can learn to hand-code a basic web page in under an hour. But it’s tedious and repetitive – even with the visual editors and integrated development environments which soon came along to lighten the load. And as web pages got richer and more elaborate, it became obvious that there was a need to separate the code from the content.

The growth of blogging ushered in an era of self-publishing, and the first stirrings of social media that became known as Web 2.0. Sites like LiveJournal, MySpace and Bebo let users type their thoughts and click publish. But, though free, they remained proprietary, like Facebook is today.

For those who wanted complete control: to publish and easily maintain a website, on their own hosting and domain name, WordPress completed the picture.

Michele Neylon, CEO of Ireland’s leading web hosting company, Blacknight, recalls the steady migration of websites from platforms like Blogspot (Blogger) to self-hosted WordPress in the early days.

“The initial movement came from bloggers who had outgrown those services and wanted more freedom to develop their sites. But we soon saw small businesses and website developers turn to WordPress also, not just because it was free, but because it was powerful and flexible as well”.

The familiar WordPress dashboard was easy to master and teach, so that relatively non-technical people could control their own content and keep their websites up to date. But the back-end familiarity didn’t come at the price of forcing everyone to be the same.

From its earliest days WordPress was designed for extensibility. A proliferation of independently developed themes and plugins brought support for a wide variety of websites. WordPress became more than an application: it became a platform for any type of web application you can imagine.

Not all open-source projects are a success, of course, but the growth of WordPress suggests a careful stewardship of the project amid the sometimes competing interests of the various users, developers, businesses and other stakeholders. And while WordPress itself is an unqualified success, what of its founder and his company?

Automattic, the company founded by Matt Mullenweg, has invested millions of dollars worth of development effort into WordPress over the years – a product which it does not own, and which is given away freely. So how does that work?

At the heart of the open-source idea is that the code is community property. So, while it is true that one contributor may not exclusively reserve the benefits of his or her work, it is a corollary that all the contributors benefit from everyone’s contribution. Open-source software, in particular, is often considered to be of a higher quality than proprietary code, simply because it is open to scrutiny by a much greater number of ‘eyeballs’.

In a world of spyware and security concerns, open-source has come to stand for trustworthiness and reliability. When anyone can view the source code in which a programme is written (and compile it themselves if they’re especially paranoid!), it becomes impossible to deliberately insert a back-door. And a global army of volunteers keeps vigil for vulnerabilities, which are quickly patched.

Successful open-source companies like Automattic don’t charge for their code – but they can and do make money from their expertise. While WordPress.org is the home of the open-source project, WordPress.com is Automattic’s revenue source: a hosting platform which uses essentially the same open-source code to run millions of websites; from small blogs hosted for free, to large commercial media sites like Time and Fortune, and everything in between, including yours truly: Irish Tech News.

And it has branched out into related areas, such as the new .BLOG domain name, which is run by a subsidiary company. WordPress.com is now the sixth-largest web platform in terms of traffic, with 141 million unique visitors per month, and Automattic was valued at $1.16 billion in 2014. Yet it has just 720 employees, most of whom work from their homes, spread across 50 countries around the world.

One of those is Corkman Donncha Ó Caoimh – in fact he was the first person hired by Mullenweg. Like many open-source developers, he got involved to scratch a personal itch. Looking for blogging software for members of the Irish Linux Users Group in 2003, he made a fork of the original open-source codebase which was also used by Mullenweg and Mike Little to create WordPress (a project called b2). Ó Caoimh’s code was maintained as a separate project called WordPress MU (multi-user), until it was re-merged into the main codebase with WordPress Version 3.0 in 2010.

WordPress users and contributors around the world will celebrate the anniversary this weekend with numerous meetups as well as online events. However, while there are more than 200 WordCamp conferences held each year, only the Belfast one coincides with the actual anniversary itself on Sunday. It’s sold out: 150 people are expected to attend the two-day event at the Peter Froggatt Centre in Queen’s University, and the organisers are inviting others to participate by submitting ‘video selfies’ – messages of friendship from across the world.

WordCamp Belfast is supported by sponsors including Blacknight. Michele Neylon says it’s a very positive experience for the company, which also sponsored WordCamp Dublin last year.

“WordCamp attendees have that open-source, DIY attitude”, he says. “They’re makers and creators: they don’t sit around waiting for things to happen. They make them happen! We love the engagement we get with our customers at these kinds of events.”

Tickets for WordCamp Belfast are sold out, but a handful of places are still available for those interested in becoming WordPress contributors at the free Contributor Event on Friday afternoon at Farset Labs in the city. For more information visit


Credits : Analyticsindiamag


Relational database management systems (RDBMSs) pay a lot of attention to data consistency and to compliance with a formal database schema. New data or modifications to existing data are not accepted unless they satisfy the constraints represented in this schema in terms of data types, referential integrity and so on. The way in which RDBMSs coordinate their transactions guarantees that the entire database is consistent at all times, in line with the well-known ACID properties: atomicity, consistency, isolation and durability. Consistency is usually a desirable property: one normally wouldn’t want erroneous data to enter the system, nor, for example, a money transfer to be aborted halfway through, with only one of the two accounts updated.
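The money-transfer example can be made concrete. The sketch below uses Python’s built-in sqlite3 module (the table and account names are purely illustrative) to show atomicity at work: when the transfer fails a balance check, the whole transaction is rolled back and neither account is left half-updated.

```python
import sqlite3

# In-memory database with two example accounts (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` atomically: either both updates commit, or neither does."""
    try:
        with conn:  # transaction scope: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            balance = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                   (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")  # aborts the transaction
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # the rollback already happened; no half-finished transfer remains

transfer(conn, "alice", "bob", 500)  # fails: alice only has 100
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50} - both balances unchanged
```

A production system would rely on the same guarantee from the database engine itself; the point is only that the two updates and the check form one indivisible unit of work.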

Yet sometimes this focus on consistency becomes a burden, because it induces (sometimes unnecessary) overhead and hampers scalability and flexibility. RDBMSs are at their best when performing intensive read/write operations on small or medium-sized data sets, or when executing larger batch processes with only a limited number of simultaneous transactions. As data volumes or the number of parallel transactions increase, capacity can be increased by vertical scaling (also called scaling up), i.e. by extending the storage capacity and/or CPU power of the database server. Obviously, however, there are hardware-induced limits to vertical scaling.

Therefore, further capacity increases need to be realized by horizontal scaling (also known as scaling out), with multiple DBMS servers being arranged in a cluster. The respective nodes in the cluster can balance workloads among one another and scaling is achieved by adding nodes to the cluster, rather than extending the capacity of individual nodes. Such a clustered architecture is an essential prerequisite to cope with the enormous demands of recent evolutions such as big data (analytics), cloud computing and all kinds of responsive web applications. It provides the necessary performance, which cannot be realized by a single server, but also guarantees availability, with data being replicated over multiple nodes and other nodes taking over their neighbor’s workload if one node fails.
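A minimal sketch of that idea, assuming a hypothetical four-node cluster: each data item is assigned to a primary node by hashing its key, and a replica is kept on the next node so the cluster survives a single failure. (Production systems typically use consistent hashing rather than the simple modulo shown here, so that adding a node moves only a small fraction of the keys.)

```python
import hashlib

NODES = ["node0", "node1", "node2", "node3"]  # hypothetical four-node cluster

def node_for(key: str, replica: int = 0) -> str:
    """Map a key to a node by hashing it; replica copies go to the following nodes."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[(h + replica) % len(NODES)]

primary = node_for("user:42")
backup = node_for("user:42", replica=1)
assert primary != backup  # the copy lives on another node: one failure is survivable
```

Scaling out then means appending to `NODES` (plus rebalancing), rather than buying a bigger server.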

However, RDBMSs are not good at extensive horizontal scaling. Their approach to transaction management and their urge to keep data consistent at all times induce a large coordination overhead as the number of nodes increases. In addition, the rich querying functionality may be overkill in many big data settings, where applications merely need high capacity to ‘put’ and ‘get’ data items, with no demand for complex data interrelationships or selection criteria. Big data settings also often focus on semi-structured data or on data with a very volatile structure (think, for instance, of sensor data, images, audio and so on), where the rigid database schemas of RDBMSs are a source of inflexibility.

None of this means that relational databases will become obsolete soon. However, the ‘one size fits all’ era, where RDBMSs were used in nearly any data and processing context, seems to have come to an end. RDBMSs are still the way to go when storing up to medium-sized volumes of highly structured data, with strong emphasis on consistency and extensive querying facilities. Where massive volumes, flexible data structures, scalability and availability are more important, other systems may be called for. This need resulted in the emergence of NoSQL databases.

The Emergence of the NoSQL Movement

The term “NoSQL” has become overloaded over the past decade, and the moniker now covers many meanings and systems. The modern NoSQL movement describes databases that store and manipulate data in formats other than tabular relations, i.e. non-relational databases. The movement might more appropriately have been called NoREL, especially since some of these non-relational databases actually provide query language facilities close to SQL. For these reasons, the original meaning of NoSQL has shifted to stand for “not only SQL” or “not relational” rather than “not SQL”.

What makes NoSQL databases different from other, legacy, non-relational systems, which have existed since as early as the 1970s? The renewed interest in non-relational database systems stems from the Web 2.0 companies of the early 2000s. Around this period, up-and-coming web companies such as Facebook, Google, and Amazon were increasingly confronted with huge amounts of data to be processed, often under time-sensitive constraints: think of an instantaneous Google search query, or of thousands of users accessing Amazon product pages or Facebook profiles simultaneously.

The systems developed to deal with these requirements, often rooted in the open-source community, are very diverse in their characteristics. Their common ground, however, is that they try to avoid, at least to some extent, the shortcomings of RDBMSs in this respect. Many aim at near-linear horizontal scalability, which is achieved by distributing data over a cluster of database nodes for the sake of performance (parallelism and load balancing) and availability (replication and failover management). A certain measure of data consistency is often sacrificed in return. A term frequently used in this respect is eventual consistency: the data, and the respective replicas of the same data item, will become consistent in time after each transaction, but continuous consistency is not guaranteed.
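Eventual consistency can be illustrated with a toy last-write-wins scheme, which is one common (though by no means the only) reconciliation rule: two replicas accept writes independently, each write carries a timestamp, and a later anti-entropy merge makes the replicas agree.

```python
def merge(a, b):
    """Last-write-wins: keep the (timestamp, value) pair with the later timestamp."""
    return a if a[0] >= b[0] else b

replica1 = (10, "alice@old.example")  # write accepted by one replica at t=10
replica2 = (12, "alice@new.example")  # later write on another replica at t=12

# Until the merge runs, a read may see either value: the inconsistency window.
replica1 = replica2 = merge(replica1, replica2)
assert replica1 == replica2 == (12, "alice@new.example")  # replicas have converged
```

The guarantee is about the end state, not the window in between: a reader who hits the stale replica before the merge simply sees the old value.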

The relational data model is cast aside for other modelling paradigms, which are typically less rigid and better able to cope with quickly evolving data structures. Often, the API (Application Programming Interface) and/or query mechanism are much simpler than in a relational setting. The Comparison Box provides a more detailed comparison of the typical characteristics of NoSQL databases against those of relational systems. Note that different categories of NoSQL databases exist and that even the members of a single category can be very diverse. No single NoSQL system will exhibit all of these properties.

We note, however, that the explosion of popularity of NoSQL data storage layers should be put in perspective considering their limitations. Most NoSQL implementations have yet to prove their true worth in the field (most are very young and in development). Most implementations sacrifice ACID concerns in favor of being eventually consistent, and the lack of relational support makes expressing some queries or aggregations particularly difficult, with map-reduce interfaces being offered as a possible, but harder to learn and use, alternative.
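To see why map-reduce is the harder road, here is the kind of aggregation a single SQL `GROUP BY` with `SUM` would express, written out as explicit map, shuffle and reduce phases in plain Python (the data is illustrative):

```python
from collections import defaultdict
from functools import reduce

# Illustrative data: (category, amount) pairs, like the rows of an orders table.
orders = [("books", 12.0), ("music", 5.0), ("books", 8.0), ("music", 3.0)]

# Map phase: emit (key, value) pairs.
mapped = [(category, amount) for category, amount in orders]

# Shuffle phase: group the emitted values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: fold each group into a single aggregate.
totals = {key: reduce(lambda a, b: a + b, values) for key, values in groups.items()}

print(totals)  # {'books': 20.0, 'music': 8.0}
```

In a real map-reduce framework the three phases run distributed across the cluster, which is exactly what makes even a simple sum-per-group more work to write and reason about than its one-line SQL equivalent.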

Combined with the fact that RDBMSs do provide strong support for transactionality, durability and manageability, quite a few early adopters of NoSQL were confronted with some sour lessons. See for instance the FreeBSD maintainers speaking out against MongoDB’s lack of on-disk consistency support[1], Digg struggling with the NoSQL Cassandra database after switching from MySQL[2], Twitter facing similar issues (it also ended up sticking with a MySQL cluster for a while longer)[3], or the fiasco of, where the IT team also went with a badly-suited NoSQL database[4].

It would be an over-simplification to reduce the choice between RDBMSs and NoSQL databases to a choice between consistency and integrity on the one hand, and scalability and flexibility on the other. The market of NoSQL systems is far too diverse for that. Still, this trade-off will often come into play when deciding on taking the NoSQL route. We see many NoSQL vendors focusing again on robustness and durability. We also observe traditional RDBMS vendors implementing features that let you build schema-free, scalable data stores inside a traditional RDBMS, capable of storing nested, semi-structured documents, as this seems to remain the true selling point of most NoSQL databases, especially those in the document store category. Some vendors have already adopted “NewSQL” as a term for modern relational database management systems that aim to blend the scalable performance and flexibility of NoSQL systems with the robustness guarantees of a traditional DBMS.

Expect the trend towards adoption of such “blended systems” to continue, except for use cases that require specialized, niche database management systems. In these settings, the NoSQL movement has rightly taught users that the one-size-fits-all mentality of relational systems is no longer applicable and should be replaced by finding the right tool for the job. For instance, graph databases have arisen as “hyper-relational” databases, which make relations first-class citizens alongside the records themselves rather than doing away with them altogether. Graph databases express complicated queries in a straightforward way, especially where one must deal with many, nested, or hierarchical relations between objects. The table below concludes this article by summarizing the differences between traditional RDBMSs, NoSQL DBMSs and NewSQL DBMSs.
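A toy illustration of why variable-depth relationship queries favor a graph model: over a plain adjacency structure (the names are illustrative), “everyone within two hops” is a simple breadth-first search, whereas in SQL it would require recursive queries or repeated self-joins.

```python
from collections import deque

# A tiny social graph as an adjacency structure (names are illustrative).
edges = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["dave", "erin"],
    "dave":  [],
    "erin":  [],
}

def within_hops(start, max_hops):
    """Return everyone reachable from `start` in at most `max_hops` traversals."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbour in edges[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    seen.discard(start)
    return seen

print(sorted(within_hops("alice", 2)))  # ['bob', 'carol', 'dave', 'erin']
```

A graph database engine does essentially this traversal natively, with relations stored as first-class edges rather than reconstructed through joins at query time.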

Wilfried Lemahieu is a professor at KU Leuven, Faculty of Economics and Business, where he also holds the position of Dean. His teaching, for which he was awarded a ‘best teacher recognition’, includes Database Management, Enterprise Information Management and Management Informatics. His research focuses on big data storage and integration, data quality, business process management and service-oriented architectures. In this context, he collaborates extensively with a variety of industry partners, both local and international. His research is published in renowned international journals and he is a frequent lecturer for both academic and industry audiences. See for further details.

Bart Baesens is a professor of Big Data and Analytics at KU Leuven (Belgium) and a lecturer at the University of Southampton (United Kingdom). He has done extensive research on Big Data & Analytics and Credit Risk Modeling. He has written more than 200 scientific papers, some of which have been published in well-known international journals and presented at top international conferences, and has received various best paper and best speaker awards. Bart is the author of 8 books: Credit Risk Management: Basic Concepts (Oxford University Press, 2009), Analytics in a Big Data World (Wiley, 2014), Beginning Java Programming (Wiley, 2015), Fraud Analytics using Descriptive, Predictive and Social Network Techniques (Wiley, 2015), Credit Risk Analytics (Wiley, 2016), Profit-Driven Business Analytics (Wiley, 2017), Practical Web Scraping for Data Science (Apress, 2018) and Principles of Database Management (Cambridge University Press, 2018). He has sold more than 20,000 copies of these books worldwide, some of which have been translated into Chinese, Russian and Korean. His research is summarized at

Seppe vanden Broucke works as an assistant professor at the Faculty of Economics and Business, KU Leuven, Belgium. His research interests include business data mining and analytics, machine learning, process management and process mining. His work has been published in well-known international journals and presented at top conferences. He is also the author of the book Beginning Java Programming (Wiley, 2015), of which more than 4,000 copies were sold and which has also been translated into Russian. Seppe’s teaching includes Advanced Analytics, Big Data and Information Management courses. He also frequently teaches for industry and business audiences. See for further details.


Credits : Channel9.msdn


This week’s episode of Data Exposed welcomes Abhi Abhishek to the show! Abhi is a Software Engineer in the SQL Server engineering group focusing on SQL Tools. Today he is in the studio to talk about mssql-cli, a new open source and cross-platform command line tool for SQL Server, Azure SQL Database, and Azure SQL Data Warehouse.

In this session, Abhi talks about the history of mssql-cli, which began as a fork of pgcli, and walks through mssql-cli features including:

  • Syntax Highlighting
  • Intellisense
  • Output formatting w/ horizontal paging
  • Multi-line mode
  • Join suggestions
  • Special Commands
  • History management


Credits : 3dprint


Modern CAD platform Onshape, formed by several former SOLIDWORKS employees, is based entirely in the cloud, making it accessible by computer, tablet, or phone. An i.materialise plugin in 2016 made the platform even more accessible, and Onshape introduced two new cloud-integrated apps this past February, following those up with its Design Data Management 2.0 in March.

Today, Onshape has rolled out Onshape Enterprise, the new premium edition of its CAD software that helps companies speed up the design process. The company has been working on this complete design platform, which provides multiple contributors from different locations with access to design data, for over a year, and some of its customers have enjoyed early production access for the last six months. This new edition allows companies to work quickly under pressure while still maintaining control, and also helps them protect their IP.

According to an Onshape blog post, “In larger companies, an ever increasing number of internal and external stakeholders across many locations need access to design data for products that are produced with increasingly complex and encumbered design processes. As these companies strain to make decades-old CAD and PDM technology operate in their complex environment, they are forced to choose between being agile or having some semblance of control.”

Recurring problems that companies frequently face include visibility, maintaining control of IP, providing new team members with access to contribute to ongoing projects, and giving engineers the agility to balance administrative and software issues with actual design work. According to a Tech Clarity study of design and engineering teams, 98% of companies report that they have experienced negative business impacts because of design bottlenecks. Onshape Enterprise was built to solve these problems.

“Onshape Enterprise’s analytics help us look back on a project and understand how our design team allocated its time, what took the most time, and how much time was required overall,” said Stefan van Woerden, an R&D Engineer for Dodson Motorsport. “We find this data very valuable to plan future projects. As our company grows, the ability to get this information in real time is extremely useful.”

First off, the software is fully customizable for approval workflows, company security roles, projects, roles, and sharing privileges.

It also has several new features, including but not limited to:

  • Project activity tracking
  • Custom domain address and single sign-on
  • Comprehensive audit logs
  • Real-time analytics
  • Instant set-up and provisioning
  • Full and Light User types, including access for both employee and guest users

Because Onshape Enterprise runs on a centralized cloud database, rather than on federated vaults and local file systems, all user activity can be permanently recorded as it happens. This allows managers to truly understand what their team is doing.

An Enterprise Light User, whose seat costs just one tenth of an Enterprise Full User seat in subscription fees, can receive privileges like commenting, exporting, and viewing that work well for non-engineering roles, as they provide visibility into activity and design data without the CAD modeling capabilities. The new Guest status gives users such as contractors, suppliers, and vendors access only to the documents specifically shared with them, resulting in better information flow.

Using comprehensive audit logs to figure out who did what, from where, and when, Onshape Enterprise helps companies protect and control their valuable IP. Users can configure permission schemes and roles, which is necessary for companies that employ hundreds or even thousands of people who would otherwise have dangerously unlimited access to the company’s IP.

Having IT teams provision access to design data can also be a burden for companies, as it typically involves frequent service packs, a full-time help desk, long installs, and new hardware.

“With Onshape Enterprise, anyone on our production floor can look at the 3D model, they can look at the drawings, they can access the machine code and always know it’s the most up-to-date information. We’ve really extended our engineering force without adding to the engineering team,” said Will Tiller, an engineering manager at Dixie Iron Works.

Over the last 10 years, the amount of real-time data flowing efficiently through every department of large companies has increased, with the notable exception of engineering. Onshape Enterprise can help companies overcome this visibility issue with efficient new tools.

“Half my product development team is based in Juarez, Mexico, and the other half works in Elgin, Illinois – and I split my time between both facilities. Onshape Enterprise helps me keep track of who is working on what and when, so I can better prioritize how our engineers spend their time,” said Dennis Breen, the Vice President of Engineering and Technology for Capsonic Automotive & Aerospace. “I also like the option to add Light Users so our manufacturing engineers and automation engineers can immediately access the latest 3D CAD data and use it to improve our processes.”

Onshape Enterprise gives large companies a platform to work faster, with more control, security, and access, and with maximum insight. This helps relieve the symptoms of a condition that Onshape has dubbed Enterprise Design Gridlock, where companies have outgrown old CAD and PDM technology that slows the design process down. With this new premium edition, companies can connect their entire enterprise with Onshape.

Onshape Standard reimagined the possibilities of parametric modeling, while the flagship Onshape Professional introduced approval workflows, company-level controls, and powerful release management. Now, Onshape Enterprise helps large teams work faster, while maintaining control, with the addition of real-time analytics, advanced security, and granular access controls.

To learn more about Onshape Enterprise, register now for a live webinar by David Corcoran, Onshape’s VP of Product, that will take place next Thursday, May 31, at 11 AM (EDT).
