Credits : Analyticsindiamag


Relational database management systems (RDBMSs) pay a lot of attention to data consistency and compliance with a formal database schema. New data or modifications to existing data are not accepted unless they satisfy the constraints represented in this schema in terms of data types, referential integrity, and so on. The way in which RDBMSs coordinate their transactions guarantees that the entire database is consistent at all times, in line with the well-known ACID properties: atomicity, consistency, isolation and durability. Consistency is usually a desirable property; one normally wouldn’t want erroneous data to enter the system, nor, for example, a money transfer to be aborted halfway, with only one of the two accounts updated.
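The all-or-nothing behavior of such a money transfer can be demonstrated with Python’s built-in sqlite3 module, whose connection object doubles as a transaction context manager. The account names and balances below are, of course, invented for illustration:

```python
import sqlite3

# In-memory database with two accounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts; both updates commit or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            # Constraint check: abort the whole transaction if funds ran out.
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # the rollback has already restored both rows

transfer(conn, "alice", "bob", 30)   # succeeds: both rows updated
transfer(conn, "alice", "bob", 500)  # fails: neither row changes
```

After the failed second transfer, the first account is not left debited: the rollback restores both balances to their state before the attempt.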

Yet, sometimes this focus on consistency may become a burden, because it induces (sometimes unnecessary) overhead and hampers scalability and flexibility. RDBMSs are at their best when performing intensive read/write operations on small or medium-sized data sets, or when executing larger batch processes with only a limited number of simultaneous transactions. As the data volumes or the number of parallel transactions increase, capacity can be increased by vertical scaling (also called scaling up), i.e. by extending the storage capacity and/or CPU power of the database server. However, there are obvious hardware-induced limitations to vertical scaling.

Therefore, further capacity increases need to be realized by horizontal scaling (also known as scaling out), with multiple DBMS servers being arranged in a cluster. The respective nodes in the cluster can balance workloads among one another and scaling is achieved by adding nodes to the cluster, rather than extending the capacity of individual nodes. Such a clustered architecture is an essential prerequisite to cope with the enormous demands of recent evolutions such as big data (analytics), cloud computing and all kinds of responsive web applications. It provides the necessary performance, which cannot be realized by a single server, but also guarantees availability, with data being replicated over multiple nodes and other nodes taking over their neighbor’s workload if one node fails.
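One way a cluster can spread data over its nodes without a central coordinator is hash-based sharding, sketched below in Python. The node names are hypothetical, and real systems typically use consistent hashing so that adding a node does not reshuffle most keys:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster members

def node_for(key: str, nodes=NODES) -> str:
    """Map a key to a node by hashing it; scaling out means adding nodes."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# Every client computes the same placement independently, so requests can be
# routed to the responsible node with no central lookup service.
assignments = {k: node_for(k) for k in ["user:1", "user:2", "user:3"]}
```

For availability, each key would additionally be replicated to the next one or two nodes in the list, so a neighbor can take over when the primary node fails.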

However, RDBMSs are not good at extensive horizontal scaling. Their approach to transaction management and their urge to keep data consistent at all times induce a large coordination overhead as the number of nodes increases. In addition, the rich querying functionality may be overkill in many big data settings, where applications merely need high capacity to ‘put’ and ‘get’ data items, with no demand for complex data interrelationships or selection criteria. Also, big data settings often focus on semi-structured data or on data with a very volatile structure (think for instance of sensor data, images, audio data, and so on), where the rigid database schemas of RDBMSs are a source of inflexibility.
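The bare ‘put’/‘get’ interface referred to above can be sketched as a toy key-value store in Python; this illustrates the idea rather than any particular product’s API:

```python
class KeyValueStore:
    """A toy key-value store: the entire API surface is put/get/delete."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # No schema, no type checks, no referential integrity: any value goes.
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
# Values can be arbitrarily structured and vary from key to key.
store.put("session:42", {"user": "alice", "cart": ["book"]})
```

There are no joins and no query language: the only selection criterion is the key itself, which is exactly what makes this interface easy to distribute and scale.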

None of this means that relational databases will become obsolete soon. However, the ‘one size fits all’ era, where RDBMSs were used in nearly any data and processing context, seems to have come to an end. RDBMSs are still the way to go when storing up to medium-sized volumes of highly structured data, with strong emphasis on consistency and extensive querying facilities. Where massive volumes, flexible data structures, scalability and availability are more important, other systems may be called for. This need resulted in the emergence of NoSQL databases.

The Emergence of the NoSQL Movement

The term “NoSQL” has become overloaded over the past decade, so the moniker now covers many meanings and systems. The modern NoSQL movement describes databases that store and manipulate data in formats other than tabular relations, i.e. non-relational databases. The movement might more appropriately have been called NoREL, especially since some of these non-relational databases actually provide query language facilities close to SQL. For these reasons, people have shifted the original meaning of NoSQL to stand for “not only SQL” or “not relational” instead of “not SQL”.

What makes NoSQL databases different from other, legacy, non-relational systems which have existed since as early as the 1970s? The renewed interest in non-relational database systems stems from Web 2.0 companies in the early 2000s. Around this period, up-and-coming web companies, such as Facebook, Google, and Amazon were increasingly being confronted with huge amounts of data to be processed, often under time-sensitive constraints. For example, think about an instantaneous Google search query, or thousands of users accessing Amazon product pages or Facebook profiles simultaneously.

The systems developed to deal with these requirements, often rooted in the open source community, are very diverse in their characteristics. However, their common ground is that they try to avoid, at least to some extent, the shortcomings of RDBMSs in this respect. Many aim at near-linear horizontal scalability, which is achieved by distributing data over a cluster of database nodes for the sake of performance (parallelism and load balancing) and availability (replication and failover management). A certain measure of data consistency is often sacrificed in return. A term frequently used in this respect is eventual consistency: the data, and the respective replicas of the same data item, will become consistent some time after each transaction, but continuous consistency is not guaranteed.
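Eventual consistency can be made concrete with a small Python simulation. The sketch below assumes a last-write-wins scheme in which one replica acknowledges a write immediately and a background anti-entropy pass brings the others up to date; the class and function names are illustrative only:

```python
class Replica:
    """One copy of the data; each write carries a timestamp (last-write-wins)."""

    def __init__(self):
        self.data = {}  # key -> (timestamp, value)

    def apply(self, key, stamp, value):
        # Accept the update only if it is newer than what we already hold.
        if key not in self.data or stamp > self.data[key][0]:
            self.data[key] = (stamp, value)

replicas = [Replica(), Replica(), Replica()]
pending = []  # updates not yet propagated to every replica

def put(key, stamp, value):
    replicas[0].apply(key, stamp, value)  # one node acknowledges immediately
    pending.append((key, stamp, value))   # the others learn asynchronously

def anti_entropy():
    """Background synchronization: eventually every replica sees every update."""
    for update in pending:
        for r in replicas:
            r.apply(*update)
    pending.clear()

put("x", 1, "draft")
put("x", 2, "final")
# At this point the replicas disagree: replicas[1] has no value for "x" yet.
anti_entropy()
# After synchronization, all replicas converge on the newest write.
```

Between the writes and the anti-entropy pass, a read served by a lagging replica returns stale (or no) data, which is precisely the window of inconsistency that eventually consistent systems accept in exchange for availability.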

The relational data model is cast aside for other modelling paradigms, which are typically less rigid and better able to cope with quickly evolving data structures. Often, the API (Application Programming Interface) and/or query mechanism are much simpler than in a relational setting. The Comparison Box provides a more detailed comparison of the typical characteristics of NoSQL databases against those of relational systems. Note that different categories of NoSQL databases exist and that even the members of a single category can be very diverse. No single NoSQL system will exhibit all of these properties.

We note, however, that the explosion of popularity of NoSQL data storage layers should be put in perspective considering their limitations. Most NoSQL implementations have yet to prove their true worth in the field (most are very young and in development). Most implementations sacrifice ACID concerns in favor of being eventually consistent, and the lack of relational support makes expressing some queries or aggregations particularly difficult, with map-reduce interfaces being offered as a possible, but harder to learn and use, alternative.

Combined with the fact that RDBMSs do provide strong support for transactionality, durability and manageability, quite a few early adopters of NoSQL were confronted with some sour lessons. See for instance the FreeBSD maintainers speaking out against MongoDB’s lack of on-disk consistency support[1], Digg struggling with the NoSQL Cassandra database after switching from MySQL[2], Twitter facing similar issues (it too ended up sticking with a MySQL cluster for a while longer)[3], or the fiasco where the IT team also went with a badly-suited NoSQL database[4].

It would be an over-simplification to reduce the choice between RDBMSs and NoSQL databases to a choice between consistency and integrity on the one hand, and scalability and flexibility on the other. The market of NoSQL systems is far too diverse for that. Still, this tradeoff will often come into play when deciding on taking the NoSQL route. We see many NoSQL vendors focusing again on robustness and durability. We also observe traditional RDBMS vendors implementing features that let you build schema-free, scalable data stores inside a traditional RDBMS, capable of storing nested, semi-structured documents, as this seems to remain the true selling point of most NoSQL databases, especially those in the document store category. Some vendors have already adopted “NewSQL” as a new term to describe modern relational database management systems that aim to blend the scalable performance and flexibility of NoSQL systems with the robustness guarantees of a traditional DBMS.

Expect the future trend to continue towards the adoption of such “blended systems”, except for use cases that require specialized, niche database management systems. In these settings, the NoSQL movement has rightly taught users that the one-size-fits-all mentality of relational systems is no longer applicable and should be replaced by finding the right tool for the job. For instance, graph databases position themselves as “hyper-relational” databases, making relations first-class citizens next to the records themselves rather than doing away with them altogether. Graph databases express complicated queries in a straightforward way, especially where one must deal with many, nested, or hierarchical relations between objects. The table below concludes this article by summarizing the differences between traditional RDBMSs, NoSQL DBMSs and NewSQL DBMSs.
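To make the contrast concrete, here is a small Python sketch of the kind of hierarchical query that graph databases treat as a first-class traversal; the edge data and function names are invented for illustration:

```python
from collections import defaultdict

# Relations as first-class data: (subject, relation, object) edges.
edges = defaultdict(list)
for src, rel, dst in [
    ("carol", "manages", "alice"),
    ("carol", "manages", "bob"),
    ("dave",  "manages", "carol"),
]:
    edges[(src, rel)].append(dst)

def reports(manager, rel="manages"):
    """All direct and indirect reports of `manager`, found by walking edges."""
    found = []
    stack = [manager]
    while stack:
        for person in edges[(stack.pop(), rel)]:
            found.append(person)
            stack.append(person)
    return found

reports("dave")  # carol, plus everyone carol manages
```

In SQL this query would require a recursive common table expression or repeated self-joins of unknown depth; in a graph query language it is a single variable-length path pattern.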

Wilfried Lemahieu is a professor at KU Leuven, Faculty of Economics and Business, where he also holds the position of Dean. His teaching, for which he was awarded a ‘best teacher recognition’, includes Database Management, Enterprise Information Management and Management Informatics. His research focuses on big data storage and integration, data quality, business process management and service-oriented architectures. In this context, he collaborates extensively with a variety of industry partners, both local and international. His research is published in renowned international journals and he is a frequent lecturer for both academic and industry audiences. See for further details.

Bart Baesens is a professor of Big Data and Analytics at KU Leuven (Belgium) and a lecturer at the University of Southampton (United Kingdom). He has done extensive research on Big Data & Analytics and Credit Risk Modeling. He wrote more than 200 scientific papers, some of which have been published in well-known international journals and presented at international top conferences. He received various best paper and best speaker awards. Bart is the author of 8 books: Credit Risk Management: Basic Concepts (Oxford University Press, 2009), Analytics in a Big Data World (Wiley, 2014), Beginning Java Programming (Wiley, 2015), Fraud Analytics using Descriptive, Predictive and Social Network Techniques (Wiley, 2015), Credit Risk Analytics (Wiley, 2016), Profit-Driven Business Analytics (Wiley, 2017), Practical Web Scraping for Data Science (Apress, 2018) and Principles of Database Management (Cambridge University Press, 2018). He has sold more than 20,000 copies of these books worldwide, some of which have been translated into Chinese, Russian and Korean. His research is summarized at

Seppe vanden Broucke works as an assistant professor at the Faculty of Economics and Business, KU Leuven, Belgium. His research interests include business data mining and analytics, machine learning, process management and process mining. His work has been published in well-known international journals and presented at top conferences. He is also author of the book Beginning Java Programming (Wiley, 2015), of which more than 4,000 copies were sold and which was also translated into Russian. Seppe’s teaching includes Advanced Analytics, Big Data and Information Management courses. He also frequently teaches for industry and business audiences. See for further details.

This article is shared by | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Channel9.msdn


This week’s episode of Data Exposed welcomes Abhi Abhishek to the show! Abhi is a Software Engineer in the SQL Server engineering group focusing on SQL Tools. Today he is in the studio to talk about mssql-cli, a new open source and cross-platform command line tool for SQL Server, Azure SQL Database, and Azure SQL Data Warehouse.

In this session, Abhi talks about the history of mssql-cli, which began as a fork of pgcli, along with mssql-cli features including:

  • Syntax Highlighting
  • IntelliSense
  • Output formatting w/ horizontal paging
  • Multi-line mode
  • Join suggestions
  • Special Commands
  • History management


Credits : 3dprint


Modern CAD platform Onshape, formed by several former SOLIDWORKS employees, is based entirely in the cloud, making it accessible by computer, tablet, or phone. An i.materialise plugin in 2016 made the platform even more accessible, and Onshape introduced two new cloud-integrated apps this past February, following those up with its Design Data Management 2.0 in March.

Today, Onshape has rolled out Onshape Enterprise, the new premium edition of its CAD software that helps companies speed up the design process. The company has been working on this complete design platform, which provides multiple contributors from different locations with access to design data, for over a year, and some of its customers have enjoyed early production access for the last six months. This new edition allows companies to work quickly under pressure while still maintaining control, and also helps them protect their IP.

According to an Onshape blog post, “In larger companies, an ever increasing number of internal and external stakeholders across many locations need access to design data for products that are produced with increasingly complex and encumbered design processes. As these companies strain to make decades-old CAD and PDM technology operate in their complex environment, they are forced to choose between being agile or having some semblance of control.”

Recurring problems that companies frequently face include visibility, maintaining control of IP, providing new team members with access to contribute to ongoing projects, and giving engineers the agility to balance administrative and software issues with actual design work; according to a Tech Clarity study of design and engineering teams, 98% of companies report that they have experienced negative business impacts because of design bottlenecks. Onshape Enterprise was built to solve these problems.

“Onshape Enterprise’s analytics help us look back on a project and understand how our design team allocated its time, what took the most time, and how much time was required overall,” said Stefan van Woerden, an R&D Engineer for Dodson Motorsport. “We find this data very valuable to plan future projects. As our company grows, the ability to get this information in real time is extremely useful.”

First off, the software is fully customizable with respect to approval workflows, company security roles, projects, and sharing privileges.

It also has several new features, including but not limited to:

  • Project activity tracking
  • Custom domain address and single sign-on
  • Comprehensive audit logs
  • Real-time analytics
  • Instant set-up and provisioning
  • Full and Light User types, including access for both employee and guest users

All user activity can be permanently recorded as it happens, because Onshape Enterprise runs on a centralized cloud database rather than on federated vaults and local file systems. This allows managers to gain a true understanding of what their teams are doing.

An Enterprise Light User, whose seat costs just one tenth of an Enterprise Full User seat in subscription fees, receives privileges such as commenting, exporting, and viewing that work well for non-engineering roles, as they provide visibility into activity and design data without the CAD modeling capabilities. The new Guest status gives users such as contractors, suppliers, and vendors access only to the documents specifically shared with them, resulting in better information flow.

Using comprehensive audit logs to figure out who did what, from where, and when, Onshape Enterprise helps companies protect and control their valuable IP. Users can configure permission schemes and roles, which is necessary for companies that employ hundreds, or even thousands, of people who would otherwise have dangerously unlimited access to the company’s IP.

It can also be a burden for companies when dealing with the overhead of having IT teams provision the access to design data, as it typically involves frequent service packs, a full-time help desk, long installs, and new hardware.

“With Onshape Enterprise, anyone on our production floor can look at the 3D model, they can look at the drawings, they can access the machine code and always know it’s the most up-to-date information. We’ve really extended our engineering force without adding to the engineering team,” said Will Tiller, an engineering manager at Dixie Iron Works.

Over the last 10 years, there have been increases in the amount of real-time data efficiently flowing in all of the departments in large companies…with the notable exception of engineering. Onshape Enterprise can help companies overcome this visibility issue with efficient new tools.

“Half my product development team is based in Juarez, Mexico, and the other half works in Elgin, Illinois – and I split my time between both facilities. Onshape Enterprise helps me keep track of who is working on what and when, so I can better prioritize how our engineers spend their time,” said Dennis Breen, the Vice President of Engineering and Technology for Capsonic Automotive & Aerospace. “I also like the option to add Light Users so our manufacturing engineers and automation engineers can immediately access the latest 3D CAD data and use it to improve our processes.”

Onshape Enterprise gives large companies a platform to work faster, with more control, security, and access, and with maximum insight. This helps relieve the symptoms of a condition that Onshape has dubbed Enterprise Design Gridlock, where companies have outgrown old CAD and PDM technology that slows the design process down. With this new premium edition, companies can connect their entire enterprise with Onshape.

Onshape Standard reimagined the possibilities of parametric modeling, while the flagship Onshape Professional introduced approval workflows, company-level controls, and powerful release management. Now, Onshape Enterprise helps large teams work faster, while maintaining control, with the addition of real-time analytics, advanced security, and granular access controls.

To learn more about Onshape Enterprise, register now for a live webinar by David Corcoran, Onshape’s VP of Product, that will take place next Thursday, May 31, at 11 AM (EDT).


Credits : Smallbiztrends


Computer-aided design, or CAD, is an essential tool for any manufacturing or product design business. There are plenty of different software programs out there that can help you with this function, but knowing which ones to choose can be a bit tricky.

Dan Taylor, content analyst at the software review and research platform Capterra, said in an email to Small Business Trends: “You have to be careful choosing CAD software, because while CAD is used in manufacturing it is also used in construction, and some software may be suited more for one than the other. This is why it’s important to try out software first.”

CAD Tools

If you’re looking for new CAD software to try out for your manufacturing business, here are a few different options to consider.


AutoCAD

AutoCAD is a 3D CAD program that Taylor says is popular with a lot of manufacturing and product design companies. It’s available for Mac and Windows on a subscription basis, with different rates depending on the length of your subscription. Features include 3D modeling and visualization, customization options and a mobile app for working on the go.


DesignCAD

From the makers of TurboCAD, DesignCAD is a software suite that offers both 2D and 3D design options. The 3D CAD program includes features for rendering, animation, modeling and more. The program costs $99.99, with optional upgrades also available.

Solidworks 3D CAD

Solidworks offers three different versions of its CAD software. The standard edition includes 3D design features for creating parts, assemblies and drawings. The premium and professional versions then include some advanced collaboration and simulation options to take those designs to the next level. Pricing is customized based on each company’s needs, so you need to contact the team directly to determine the cost and features you need. A free trial period is also available.


Vectorworks

Vectorworks offers a number of different software options for different types of design and products, ranging from architecture to structural design. So you can check out the different options and find the one that best fits with your business’s niche. The company also offers mobile solutions and a trial version.


FreeCAD

FreeCAD is a product design and modeling platform that is, as its name suggests, free. It’s an open source tool that runs on multiple platforms and supports open file formats. Since it’s so customizable, it takes a bit of tech knowledge to navigate, but the price allows you to at least try it out without making a major upfront investment.

Creo Parametric 3D Modeling Software

A 3D CAD tool specifically made for product development, Creo Parametric offers both design and automation features intended to help product makers bring their ideas to market faster. You can use it for everything from framework design to sheet metal modeling. There’s a free trial, with customized pricing options available afterward.

TurboCAD Deluxe 2018

The latest version of TurboCAD, this option includes both 2D and 3D design options. The 3D design capabilities allow you to create realistic renderings of products, perfect for those who need to present new product ideas. It also includes some architectural features as well as 3D printing capabilities. Priced at $149.99, there’s also a free trial option available.


Shapr3D

Shapr3D is a tool for iPad Pro and Apple Pencil. It’s not as fully featured as some other 3D CAD tools. But for small manufacturers that prefer working on a tablet or using 3D printing, it can be a unique and cost effective option. The Pro version is $300 per year, and there’s also a free option available for beginners if you’d like to play around with the technology.


OpenSCAD

For programmers or those with a knowledge of coding, OpenSCAD offers a free and open solution for creating 3D designs and models. It’s available to download for Linux/UNIX, Windows and Mac. Its features focus more on the CAD aspect than on design.


SolveSpace

SolveSpace is another free offering that allows you to create digital models of 3D products. It gives you the ability to set dimensions, create 3D shapes, analyze measurements and export the designs. It is an open source tool and offers a constraint-based modeling feature and simulation capabilities for Windows, Mac and Linux users.


Credits : Eu-startups


As our PHP Tech Lead, you seamlessly switch between being a passionate coder and a leader who can scale up our development team and help take Scribbr to the next level.

We are constantly seeking ways to improve our services through technology. Our goal is to build a strong, efficient and fun development team that can get the job done, using the best, cutting-edge techniques.

Our tech stack: PHP 7.1, Symfony 3, Doctrine, Vagrant, Redis, Selenium, Scrutinizer, PHPStan, PHPUnit, Gitlab, Heineken, TDD, DDD and CI/CD.

Your responsibilities

  • You will help develop one of the most effective and ambitious dev teams in Amsterdam!
  • You will play a key role in determining the direction for Scribbr’s development team. After discussion with the development team and the co-founders, you have the final say.
  • You will suggest improvements to processes, technologies and interfaces to enhance the team’s effectiveness.
  • You are responsible for infrastructure stability, monitoring and recovery. If, for some reason, we don’t have a TV with the right server stats, you will help us get one ASAP.
  • You will work closely with the product owner and business to ensure we work on issues that deliver business value and improve the codebase.
  • You will be a leader in discussions, keeping them concise and effective.
  • You will help mentor and lead the efforts of the team in addition to doing “heads-down” coding.
  • It’s your job to make the development team’s output predictable.
  • Your job also includes preventing technical debt and keeping the application future-proof.
  • As a team leader, you’ll play a key role in hiring new developers to join our growing team.

What are we looking for?

  • An experienced senior PHP developer with strong leadership skills.
  • Experience with DDD and commitment to the SOLID principles.
  • Experience with Symfony or similar frameworks.
  • Experience with improving code and architecture. You know what to improve first, how and when.
  • Excellent interpersonal and communication skills.
  • You are used to Dutch culture (or are Dutch).
  • Experience with Agile software development.
  • Eager to learn about the business. Having a business/entrepreneurial mindset is a big plus.
  • Passion for understanding emerging technologies with pragmatic insight into where those technologies can be integrated into business solutions.

What do we offer?

  • Complete autonomy. Do we need to refactor? Go for it!
  • A role on the management team and room to grow into the role of CTO.
  • A fun, ambitious, informal work environment that embraces the latest technologies
  • 25 paid holidays!
  • A generous tech budget to upgrade your gadget collection.
  • And of course… Friday beers, BBQs, a Scribbr boat docked outside the office, smart and young colleagues, free lunch, etc.


Credits : Cxotoday


The computer software industry is constantly changing, and we are nearly halfway through 2018. To keep up in a competitive market, companies need to follow the main technology trends relevant to their industry, understand how those trends impact businesses, and adapt to the changes. Why is this important? It helps them find more clients, provide services those clients have never seen before, and increase profit.

The most notable industry and market trends are connected with artificial intelligence, AR and VR, Big Data analysis, and improved customer service. Corporations are getting more and more tools for analyzing their clients in order to fulfill their needs.

Blockchain technologies will also be a major trend. They became popular through cryptocurrencies, and many engineers still see a big future for blockchain technologies in different industries.

AI And Machine Learning

The number of projects connected with AI research is twice what it was last year, and the field is developing quite rapidly. Big corporations are investing more and more in AI and machine learning research, and the number of successfully run AI projects should increase fast.

These systems have many advantages, most notably the ability to learn. Many search engines already work using AI and machine learning technologies, and they deliver much more effective results in less time than conventional software.


Automation

For decades, people have tried to automate routine processes to reduce expenses and to free themselves for more interesting, less tedious jobs. There is nothing strange about this. Now there are even more possibilities to let software do difficult tasks in much less time than people need.

When working in online markets, companies automate the processes of recovering passwords, responding to clients, and processing their requests. This makes the whole workflow much simpler and faster. With AI and Big Data technologies, personalized mailings and responses to clients’ letters can be automated as well.

Using Big Data

Companies will gather more and more Big Data about their clients and products. Why do they need this? Combined with powerful analysis tools, it helps them predict the behavior of their clients, understand their needs, and provide better service.

As they need more space for data storage and processing, companies will spend much more on the software and hardware required for it. Different companies will work on their own projects, developed according to their objectives.

Transparency And Communication

Where many teams were once isolated from each other, it is now time to work together. When working on a common project, teams will communicate more with other teams and make the development process more open and transparent. Teamwork is becoming a new big trend.

Project managers understand that sharing the latest software technology and experience helps to reach objectives fast and get better results. That is why they support the communication between different teams and departments in companies.


Security

We read about hacker attacks, data leaks, and stolen information almost every day. Such incidents cost many companies dearly. They will invest more in security to reduce the number of attacks and leaks, which ultimately saves them a great deal of money.

Companies are going to use more professional security software. They will also work harder on protecting the data of their clients. As more and more transactions are made online, software for much more secure payments is required.

Marketing Technologies

People are bored with old advertising technologies. They use AdBlock and other tools to avoid unwanted ads while they look for what they need. That is why companies try to use software to make their marketing technologies more original.

Many brand companies go online and start earning bigger profits by selling their products on the Internet. They run clever marketing strategies using social networks, Google SEO instruments, and other tools. That is why companies should look for original ways to do their marketing.


Measuring Performance

By using different tools, companies can measure various parameters of their service: how fast they provide their services, how much they spend on them, and how satisfied their clients are. Then they analyze the results and see what they can improve.

Measurement helps them find out what they are doing right and wrong, what their weak points are, and what they can do to gain more profit. There are different ways to gather such information, for example customer surveys and other tools.


Personalization

Every client wants to see what is interesting to them; if they get information that is not relevant, they become bored and dissatisfied. That is why companies will work more on the personalization of customer service.

When working with their clients, companies should use those clients’ names and offer personalized search results and offers based on their interests and requests. People pay much more attention to things that interest them.

Different Platforms

Clients prefer to use services on different platforms. For example, they can create a file by using desktop software, edit it later in a web application, and then work with it by using a mobile device. It makes companies develop their services for different platforms.

If an enterprise develops an application, there should be versions for most desktop and mobile platforms. This may be difficult, but clients don’t like working with software that doesn’t support their devices.


This article is shared by | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Theregister


If you want a vision of the future of software creation, imagine a boot process spinning up a server, forever.

Speaking at Continuous Lifecycle London* on Wednesday, Mike Roberts, co-founder of consultancy Symphonia Cloud, employed less Orwellian terminology for tomorrow: continuous experimentation.

Perhaps you’ve heard of continuous delivery, the trending software engineering practice that aims to shorten development cycles while making releases simultaneously speedy and boring.

Continuous experimentation takes that a step further, reducing development cycles to weeks, days or even hours while turning the data each cycle produces into fuel for further innovation.

“There’s no point in having continuous experimentation if we’re not continuously learning,” Roberts said.

Over the past 20 years, he explained, “the lead time for delivering software has come down significantly. It used to take years. Now it’s more like weeks. We have the opportunity to bring this lead time down further to days or hours.”

Echoing arguments made earlier in the day by conference presenter Linda Rising, Roberts urged businesses to reorganize their IT operations around technology and management processes that enable the rapid testing of ideas.

“Most of our ideas suck,” he said, attributing the quote to software consultant Jeff Patton (though any cynic, unbidden, will say as much).

“But some of them are amazing,” he added. “If we can try enough of these ideas out, we can play a numbers game. We can find the ideas that will really help our customers.”

Enterprises, Roberts insisted, have to start thinking about how they can embrace experimentation as their core way of working, something many startups have done.

That’s not an easy task. Roberts enumerated a number of obstacles that bar the way to reaching the dream state of frictionless, perpetual innovation.

There are technical challenges: minimizing incidental complexity; reducing infrastructure costs and commitments to new systems; avoiding the technical tar pit formed by a fragmented ecosystem.

Roberts favors serverless technology for overcoming these issues and contends it has as much value for accelerating development as it does for reducing expenses.

“If you’re only using serverless for the cost savings, I think you’re missing a big trick,” he said.

There are organizational and cultural challenges, too: minimizing the time from code completion to code deployment; nurturing a culture of learning; and spreading ownership of product innovation across teams.

“When we treat engineers as just code robots, we’re not really releasing their full potential,” he said.

And there are safety challenges: making it safe to fail; ensuring budgets don’t get busted along the way; and protecting data and security.


Credits : Sdtimes


In the modern economy, every business is a software business. Why?  Because companies have figured out that the easiest way to introduce innovation into a marketplace is with software, whose defining characteristic is that, well, it’s soft.  Instead of waiting months to make a change to some physical piece of equipment, software can be distributed quickly and widely to smartphone users, programmable manufacturing equipment, and a wide variety of other destinations. This is why you can Google just about any company of any size and find they have openings for software engineers.

But if you’re spending all that money on developer salaries, how do you maximize the amount of innovation you can get out of them?  It turns out, iterations are the currency of software innovation.

It’s all about the at-bats
Venture capitalists are in the business of finding innovation, and most of them will tell you that for every 10 companies they invest in, they are happy if one hits it big.  Among the things that public cloud did for the VC community was to let them take more swings of the bat by funding more companies at a lower capitalization, because those startups could begin without having to purchase hardware.  More at-bats, to continue the baseball analogy, resulted in more innovation.

Applying that same hit percentage to software development, companies have a 10% chance of any release containing an innovation that will stick with its intended audience.  So is it better to have four chances at innovation a year with quarterly releases, 12 chances with monthly releases, or 52 chances with weekly releases?  The strategic answer is obvious: more releases, meaning more iterations of software, produce more chances at innovation.  Tactically, though, how do you do that?
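The numbers game can be made concrete. If each release independently has a 10% chance of containing a hit (the article's illustrative figure, not an empirical constant), the chance of at least one hit in a year of n releases is 1 - 0.9^n. A quick sketch:

```python
# Probability of at least one successful innovation per year,
# assuming each release independently has a 10% chance of a "hit".
def chance_of_a_hit(releases_per_year, hit_rate=0.10):
    return 1 - (1 - hit_rate) ** releases_per_year

for cadence, n in [("quarterly", 4), ("monthly", 12), ("weekly", 52)]:
    print(f"{cadence}: {chance_of_a_hit(n):.1%}")
```

This prints roughly 34% for quarterly, 72% for monthly and 99.6% for weekly releases, which is the whole strategic argument in three lines.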

Maximizing iterations: From monoliths to microservices
In the early 1990s when most software was running in data centers on physical hardware, iteration speed was trumped by risk mitigation.  Back then, those physical servers had to be treated like scarce resources. That’s  because they were the only way to make a unit of compute accessible to run a software stack, and to replace that unit of compute took months.  Components of a monolithic application most often communicated with each other within the same memory space or over client/server connections using custom protocols.  All the pieces were typically moved together into production to avoid as much risk as possible, but the side effect of that was that if one component had issues, the entire application had to be backed out, which limited iteration speed further.

But virtual machines can be created in minutes and containers in seconds, which changed the way that developers thought about application components.  Instead of relying on in-memory or custom-protocol communication, each component could expose an HTTP-based API that acts as a contract between the components.  As long as that contract didn’t change, the components could be released independently of one another. Further, if every component sat behind its own load balancer, it could also be scaled independently and take advantage of rolling deployments, where old instances of a component are removed from behind the load balancer as new ones are injected.
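The rolling-deployment idea can be sketched in a few lines of Python. This is a toy simulation, not any particular load balancer's API: old instances are drained one at a time as new ones are injected, so the pool is never empty.

```python
from collections import deque

class LoadBalancer:
    """Toy round-robin load balancer over component instances."""
    def __init__(self, instances):
        self.pool = deque(instances)

    def route(self):
        # Round-robin: serve the front instance, then rotate it to the back.
        instance = self.pool[0]
        self.pool.rotate(-1)
        return instance

    def rolling_deploy(self, new_instances):
        # Swap instances one at a time; the service stays up because
        # the pool always contains something to route to.
        for new in new_instances:
            self.pool.popleft()   # drain one old instance
            self.pool.append(new) # inject its replacement

lb = LoadBalancer(["v1-a", "v1-b", "v1-c"])
lb.rolling_deploy(["v2-a", "v2-b", "v2-c"])
print(sorted(lb.pool))  # all traffic now goes to v2 instances
```

The point of the sketch is the invariant: because the API contract is unchanged, v1 and v2 instances can coexist behind the balancer mid-deploy without clients noticing.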

These are the modern tenets of a microservices-based architecture, which, thanks to those API contracts, is more loosely coupled than its monolithic predecessors, enabling faster iterations.

Kubernetes is a big deal, and so is serverless
But now if you have hundreds or thousands of containers to manage for all these microservices, you need a way to distribute them across different physical or virtual hosts, figure out naming and scheduling, and improve networking because different components might be on the same host, negating the need for packets to go out to the network card.  This is why Kubernetes is such a big deal and why Google (through GKE), AWS (through EKS), and Cisco (through CCP), among others, are so bought into the container clustering platform.  And again, it’s all in the name of iterations, so that development teams can more loosely couple their components and release them faster as a way of finding innovation.

But what’s next?  The big deal over serverless architectures is that they could be the next step in this evolution.  Instead of coupling components via API contracts, serverless functions are tied together through event gateways. Instead of having multiple instances of a component sitting behind a load balancer, functions sit on disk until an event triggers them into action.  This requires a far more stateless approach to building the logic inside the individual functions but is an even looser coupling than microservices with potentially better underlying physical server utilization, since the functions are at rest on disk until necessary.
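The looser, event-driven coupling can be illustrated with a toy event gateway (a model of the idea, not any particular FaaS provider's interface): stateless functions are registered per event type and sit idle until a matching event triggers them.

```python
# Toy event gateway: stateless functions registered per event type,
# invoked only when a matching event arrives.
handlers = {}

def on(event_type):
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

def dispatch(event_type, payload):
    # Functions are "at rest" until an event triggers them.
    if event_type not in handlers:
        raise KeyError(f"no function registered for {event_type!r}")
    return handlers[event_type](payload)

@on("order.created")
def bill_customer(payload):
    # Stateless: everything the function needs arrives in the event payload.
    return f"billed {payload['customer']} for {payload['amount']}"

print(dispatch("order.created", {"customer": "acme", "amount": 42}))
```

Note the contrast with the microservices sketch: there is no always-on instance behind a load balancer, only a mapping from events to functions, which is why utilization can be better.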

The bottom line
The bottom line is that the best way to find a good idea is to iterate through ideas quickly and discard the bad ones once you’ve tried them.  This concept is driving application architecture, container clustering platforms, and serverless approaches in an attempt to remove as much friction from the software development and release processes as possible.  The potential innovation gains from maximizing iterations are what just about every company is chasing these days and it’s all because iterations are the currency of software innovation.


Credits : Quantamagazine


Do genes behave like lines of computer code? Our April puzzle discussed ways in which genes hold true to this analogy: They have control structures commonplace in computer programs, such as “if-then” logic, “do loops,” timing routines and cascading “subroutine calls.” We also listed some ways that DNA programs differ from ordinary computer programs: Genes program three-dimensional structures from scratch in a water-based medium, using massive parallelism and swarm programming while making use of, and being constrained by, the laws of physics and chemistry.

There’s another important way that genes behave differently from computer code and mathematical models generated by humans. In a nutshell, biology is extremely complicated and messy. Unlike a well-documented and logically organized computer program that a professional software engineer would write, evolution’s algorithms were not written with human understanding in mind. DNA logic, which can seem like the original “spaghetti code,” is exceedingly difficult to follow in its details. This difference between human-generated and evolution-generated code also holds for mathematical models applied to biology. Unlike the smooth, elegant models that work in arguably simpler sciences like physics, models of biological situations must cope with sudden discrete changes and chaotic nonlinear functions that are very difficult to predict mathematically. While we can understand and appreciate general principles in these problems, we can’t escape the complications that lie just below the surface.

Problem 1

In this problem you have to figure out the details of a hypothetical scenario in which a growing embryo can initiate the formation of two bony rods in the middle of its body using morphogens. Imagine a rectangular sheet consisting of 101 vertical columns and 200 horizontal rows of identical round cells lined end to end. The cells along the left edge (in column 0) can sense that they are on the edge, and can activate genes to release three different morphogens, A through C, in different concentrations. Each morphogen achieves its highest concentration at the left edge of the sheet, but each diffuses at a different rate, so that the concentrations of A, B and C at the right edge are respectively 0.1, 0.2 and 0.4 times their left-edge concentrations, with a uniform gradient in between.

Each cell in the sheet is programmed to make three pairs of molecules — one pair for each morphogen. Each pair consists of a “bone initiator” molecule and a “bone suppressor” molecule. These molecules get switched on or off based on the concentration of their particular morphogen, as shown in the table below. Thus, morphogen A’s bone initiator becomes active when the concentration of A is at or below 0.64 units/mL, whereas A’s bone suppressor becomes active when A’s concentration falls below 0.46 units/mL. The bone initiators and suppressors related to the other morphogens function similarly, but at different concentrations of their morphogens, as shown below. Each bone suppressor, when active, completely blocks the action of its corresponding bone initiator. Bone is laid down when at least two bone initiators are active in a given cell, without being blocked by their corresponding suppressors.

Morphogen   Bone initiator requires   Bone suppressor requires
A           <= 0.64 units/mL          < 0.46 units/mL
B           <= 6.8 units/mL           < 6.1 units/mL
C           <= 2.8 units/mL           < 2.6 units/mL

What concentrations of the three morphogens at the left edge would make columns 40 to 45 and columns 55 to 60 of the cells lay down bone in response to two active bone initiators, while no other cells lay down bone? Of the several concentrations that can work, which ones might be least prone to development errors in response to small random fluctuations in morphogen concentrations?

To allow only the specified cells to have two active unsuppressed bone initiators, we need the following things to be true.

i. One of the morphogens must reach its bone initiator threshold at the 40th column and must reach its bone suppressor threshold at the 61st column.

ii. The other two morphogens must behave as follows. One of them must activate its initiator sometime before the 40th column and activate its suppressor exactly at the 46th column; the other must activate its initiator exactly at the 55th column without activating its suppressor until after the 60th column.

Condition (i) is quite stringent, and only morphogen A can meet it. The difference between its bone initiator and suppressor thresholds, 0.18 units/mL, spread over the 20 columns from 40 to 60, implies a gradient of 0.009 units/mL per column, which is exactly the gradient of a morphogen whose left-edge concentration of 1 unit/mL falls to 0.1 times that value at the right edge, as specified. This gives us a consistency check that morphogen A passes, but morphogens B and C do not.

Condition (ii) is more flexible, and both morphogens B and C can meet it. Thus, we can use morphogen B for the left bone and morphogen C for the right as Alexandre did, or the other way around as Ed did. You can solve for the left-edge concentrations using the equation given by Alexandre, or use a spreadsheet as Ed did. Each of these two choices works for a range of left-edge concentrations of the two morphogens. For example, in the first case, the lowest left-edge concentration of B that works is 9.532. The concentration dips below 6.8 (the initiator threshold) well before the 40th column, stays just slightly above 6.1 (the suppressor threshold) at column 45, and dips comfortably below it at column 46. On the other hand, the highest possible left-edge concentration that works is 9.651. The concentration gradient behaves similarly, but this time just barely dips below the suppressor threshold at column 46.

The left-edge concentration ranges that work for the two cases are as follows:

A = 1 unit/mL, B = 9.532 to 9.651 units/mL, C = 4.142 to 4.179 units/mL

A = 1 unit/mL, B = 11.972 to 12.142 units/mL, C = 3.562 to 3.591 units/mL

The middle-of-the-range concentrations, 9.59 and 4.16 for B and C in the first case, and 12.06 and 3.58 in the second, are the ones that would be least prone to developmental errors caused by random fluctuations in morphogen concentrations, because they provide a “cushion” of a few fractions of a unit on both sides at the transition columns, 45 to 46 and 54 to 55. At the extreme ends of the above ranges, random fluctuations could fail to initiate bone formation at the right column, or could spill over to adjacent columns, making the bones thicker or thinner than they need to be. In this connection, the figures I provided for morphogen A to keep things simple are much too tight to be biologically plausible. More realistic concentrations in the first line of the table would have been <= 0.645 and < 0.455.
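The whole solution can be checked mechanically. Here is a small sketch using the mid-range values of the first case (A = 1, B = 9.59, C = 4.16 units/mL); exact rational arithmetic avoids floating-point trouble right at the thresholds, such as morphogen A sitting exactly at 0.46 units/mL in column 60.

```python
from fractions import Fraction as F

# Threshold table: initiator active when conc <= i, suppressor when conc < s.
thresholds = {"A": (F("0.64"), F("0.46")),
              "B": (F("6.8"),  F("6.1")),
              "C": (F("2.8"),  F("2.6"))}
# Right-edge concentration as a fraction of the left-edge value.
right_ratio = {"A": F("0.1"), "B": F("0.2"), "C": F("0.4")}
left_edge   = {"A": F("1"),   "B": F("9.59"), "C": F("4.16")}

def conc(m, col):
    # Uniform gradient over columns 0..100.
    return left_edge[m] * (1 - (1 - right_ratio[m]) * F(col, 100))

def bone(col):
    active = 0
    for m, (init, supp) in thresholds.items():
        c = conc(m, col)
        # An initiator counts only if its suppressor is not also active.
        if c <= init and not c < supp:
            active += 1
    return active >= 2  # bone needs two unsuppressed initiators

bone_columns = [col for col in range(101) if bone(col)]
print(bone_columns)  # columns 40-45 and 55-60, as required
```

Running this confirms that bone is laid down in exactly columns 40 through 45 and 55 through 60 and nowhere else, matching the analysis above.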

Note that the above example uses morphogen concentrations to produce exact patterns despite small random fluctuations, whereas the Turing mechanism that seems to be operative in producing zebra fish stripes magnifies such fluctuations to produce unique, random patterns. I suspect that the former mechanism operates when an exact, symmetric pattern is desired, such as the complicated eyespots on butterfly wings, whereas the latter mechanism is used in unique, nonsymmetrical patterns such as human fingerprints.

Our second problem was based on a 2015 Quanta article about the biologist Jennifer Marshall Graves’ prediction that the human Y chromosome could disappear in future evolution. This prediction is based on the following information in the article: “In the last 190 million years, the number of genes on the Y has plummeted from more than 1,000 to roughly 50, a loss of more than 95 percent. The X chromosome, in contrast, still has roughly 1,000 genes.”

Problem 2

There are two schools of thought about whether or not the Y chromosome will disappear in the distant future. This problem examines the merits of the two arguments.

First, what will be the fate of the Y chromosome if you linearly extrapolate the loss of genes (1,000 to 50 in 190 million years)?

On the other hand, the Y chromosome has lost none or almost no genes in the last 25 million years (different estimates say 0 to 3), leading some scientists to argue that the deterioration of the Y has stopped. Perhaps there is a sizable advantage to keeping all the male-producing genes together in a neat “code module”! Assuming a constant probability for the disappearance of Y-chromosome genes over time, what is the probability that the pause or marked decrease in the loss of just 0 to 3 genes over the last 25 million years is due to chance?

As Alexandre commented, the Y chromosome’s rate of loss of genes is 950/190 = 5 genes every million years. If we do a linear extrapolation at this rate, the remaining 50 genes should disappear in just 10 million years. The fact that only 0 to 3 genes have been lost in the last 25 million years means that the loss of genes on the Y chromosome has indeed slowed dramatically. You would expect 1 gene to be lost every 0.2 million years, which works out to an expected 125 genes lost in 25 million years. In order to estimate how unlikely it is to have lost only 0 to 3 genes, we can use the Poisson distribution formula Alexandre provided, setting the expected value (λ) to 125 genes in the formula to obtain a probability of 1.7 x 10^-49. Alternatively, you can use an online Poisson distribution calculator with the upper limit of genes lost (x) = 3 and the expected number lost (λ) = 125 to get this answer. This probability is extremely tiny — indeed, it is 0 for all practical purposes. The linear model of Y-chromosome gene loss is obviously wrong.

Should we try a different mathematical model, such as the exponential one that Alexandre tried (and which gave a similar result)?

Not really — as I said, biology is far too complicated and messy. The idealized mathematical distributions we are familiar with simply do not model the real situation here. What actually happens to cause gene loss is based on rare, discrete, low probability events that have to be fixed in the entire population. On very rare occasions, chunks of DNA of different sizes break off and get stuck on other chromosomes (a “translocation”). This has to happen just right, so that the change does not disrupt the cell’s very complex machinery (a “balanced” translocation). While this is extremely unlikely, it can certainly happen — even for regions as large as entire chromosomes. There was a report in fact about a seemingly normal man in China with only 22 pairs of chromosomes instead of the usual 23. One of his chromosomes (not the Y) got stuck to another. But even such a rare event is not enough by itself — the change has to be advantageous enough to spread to the rest of the population, which is a very unpredictable process. Again, these things can happen — in fact, it has happened at least once within the last 7 million years in human evolution for a different chromosome. We know this because our closest primate relatives have 24 pairs of chromosomes (one pair more than we do). A very complex mathematical model could, in theory, be created to describe this translocation process and its probability of success and fixation, but in practice it wouldn’t work. It would need much greater quantitative understanding of extremely low probability processes.

So much for mathematical modeling. Perhaps more relevant, the primate Y chromosome has evolved sophisticated mechanisms using palindromic regions to repair defective regions and therefore keep backup copies of key genes — the lack of which is one of the reasons the Y is supposed to have lost genes in the first place. This may be another reason why gene loss has slowed or stopped, in addition to the fact that genes for maleness all need to be together, and translocations of small numbers of genes away from the Y can no longer be successful.

So it seems the Y chromosome may be safe. Even if it is not, and all its genes get translocated en masse to another chromosome in a single unlikely event, its genes will continue to make males. If they do not, this change will not survive. There are indeed many other ways to create males in nature even in the absence of a Y chromosome, as Emily Singer has reported in Quanta. The Y chromosome method happens to be what our primate lineage ended up with.

As we’ve seen, although the programming analogy has merit, the biological details are far more complicated than we would predict from a logical programming perspective (such as the one Alan described). Nature’s programs are not designed to be easy for humans to understand. Processes like evolutionary algorithms and artificial life have taught us that programs designed by evolution are extremely complicated and sometimes seem to defy logic, but they work. The decompilation of the human genome that I alluded to in the original puzzle column will not be easy.



Credits : Globenewswire


AxiHub (OTC: JZZI) has engaged Valuesetters to provide $225,000 in software development services over the next 12 months to develop the following platforms/applications:

  • Axi-Financial Web application
  • Axi-Crypto Web application
  • Axi-Hub website

Valuesetters has been tasked with the creation and maintenance of software for the web and mobile applications for the various AxiHub websites, along with their content for the Axi-Hub platforms. The platform will allow users and various traded companies to take advantage of the software’s capabilities, among them the ability to discuss and access information on various companies as well as the public markets generally. The content on the message board, as well as each user’s profile, links to the ChoiceTrade trading platform, where users can trade the stocks. ChoiceTrade is also a ValueSetters client.

The software development team plans to build cutting-edge web platforms, websites and mobile applications designed to drive customer engagement, retention and increase customer satisfaction and experience when using the Axi-Hub cloud-based services.

The Axi-Financial and Axi-Crypto software platforms are designed for equities, options, forex, and all the top cryptocurrencies. Users will be able to open and fund an account seamlessly and trade through an exclusive brokerage marketplace with smart order routing and access to more than 20 exchanges, covering all aspects and needs of this fragmented marketplace. We have been working closely with many industry specialists to best understand the needs of the investor and believe we will provide the best price, access, and execution.

The contract calls for a unique and easy-to-navigate platform on the web and mobile devices, which will allow for uninhibited traffic growth and an ideal user experience.  The AxiHub Web applications platform is planned to be 100% innovative, unique and engaging for all parties involved. The web and mobile applications will be developed with the purpose of allowing non-technical users to visit and use the platform from their device and manage all aspects of the experience. Axi-Financial and Axi-Crypto are the first of several platforms that are planned for AxiHub, with a plan to rollout other sites based on user interest and activities.

About ValueSetters: (OTC: VSTR) Led by a team of professional investors and technology specialists, ValueSetters is a publicly traded boutique advisory firm with unique expertise in helping early-stage companies raise capital over the internet. The company also provides technology consulting services as well as strategic advice to help companies grow and evolve to meet the challenges of today’s marketplace.

About AxiHub: (OTC: JZZI) AxiHub’s first mobile app brings together information from email, social media, blogs and message boards into one easy-to-use platform, utilizing sophisticated technology based upon the users’ interests and activity. Users can create multiple profiles, quickly and easily.

The information contained herein includes forward-looking statements. These statements relate to future events or to our future financial performance, and involve known and unknown risks, uncertainties and other factors that may cause our actual results to be materially different from any future results, levels of activity, performance or achievements expressed or implied by these forward-looking statements. You should not place undue reliance on forward-looking statements since they involve known and unknown risks, uncertainties and other factors which are, in some cases, beyond our control and which could, and likely will, materially affect actual results, levels of activity, performance or achievements. Any forward-looking statement reflects our current views with respect to future events and is subject to these and other risks, uncertainties and assumptions relating to our operations, results of operations, growth strategy and liquidity. We assume no obligation to publicly update or revise these forward-looking statements for any reason, or to update the reasons actual results could differ materially from those anticipated in these forward-looking statements, even if new information becomes available in the future.

