Credits : Healthdatamanagement

 

In the recent report entitled, “100 Data and Analytics Predictions Through 2022,” Gartner analysts offer their views on how data management and analytics trends will evolve over the next five years, and how those trends will impact software development. The report was prepared by analysts Douglas Laney, Guido De Simoni, Rick Greenwald, Cindi Howson, Ankush Jain, Valerie Logan and Alan Duncan.

Application development predictions

Gartner analysts broke out their predictions in the area of software into two main themes—application development and enterprise application software. Four trends will dominate application development, they say.

Virtual codevelopers

“By 2022, at least 40 percent of new application development (AD) projects will have virtual AI co-developers on their teams,” Gartner says.

AI-enabled test set optimizers

“By 2022, 40 percent of AD projects will use AI-enabled test set optimizers that build, maintain, run and optimize test assets,” according to the Gartner report.

Hosted AI services

“By 2022, 30 percent of AD projects will incorporate hosted AI services; fewer than 5 percent will build their own AI models,” Gartner analysts predict.

Event-driven business process management

“By 2022, 50 percent of digital business technology platform projects will connect events to business outcomes using event-driven intelligent business process management suite (iBPMS)-oriented frameworks,” Gartner says.

Enterprise application software predictions

“The enterprise application market is again reinventing itself, headlined by the use of AI, conversational platforms and the exploitation of business network data. Technology business unit leaders need to prepare for new monetization tactics and new competitors through 2022,” the Gartner analysts say. They offered the following three predictions in this area.

Artificial intelligence and recruiting

“By 2021, 30 percent of high-volume recruiting activities (sourcing, screening, shortlisting and candidate interaction) will be done without human intervention, using innovative applications based on AI and data as a service (DaaS),” Gartner says.

Ubiquitous intelligent applications

“By 2022, ‘intelligent’ applications will be ubiquitous, but their usage for managing complex and custom processes will be less than 5 percent,” Gartner predicts.

Real-time analytics

“Between 2016 and 2019, spending on real-time analytics will grow three times faster than spending on non-real-time analytics,” the Gartner analysts predict.

This article is shared by www.itechscripts.com | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Javaworld

 

 

The Java Development Kit (JDK) is one of three core technology packages used in Java programming, along with the JVM (Java Virtual Machine) and the JRE (Java Runtime Environment). It’s important to differentiate between these three technologies, as well as to understand how they’re connected:

  • The JVM is the Java platform component that executes programs.
  • The JRE is the on-disk part of Java that creates the JVM.
  • The JDK allows developers to create Java programs that can be executed and run by the JVM and JRE.

Developers new to Java often confuse the Java Development Kit and the Java Runtime Environment. The distinction is that the JDK is a package of tools for developing Java-based software, whereas the JRE is a package of tools for running Java code.

The JRE can be used as a standalone component to simply run Java programs, but it’s also part of the JDK. The JDK requires a JRE because running Java programs is part of developing them.

Figure 1 shows how the JDK fits into the Java application development lifecycle.

Figure 1. High-level view of the JDK (image: Matthew Tyson)

Just as with the recent introduction to the Java Virtual Machine, let’s consider the technical and everyday definitions of the JDK:

  • Technical definition: The JDK is an implementation of the Java platform specification, including compiler and class libraries.
  • Everyday definition: The JDK is a software package you download in order to create Java-based applications.

Get started with the JDK

Getting Java set up in your development environment is as easy as downloading a JDK and adding it to your classpath. When you download your JDK, you will need to select the version of Java you want to use. Java 8 is the version most commonly in use, but as of this writing Java 10 is the newest release. Java maintains backward compatibility, so we’ll just download the latest release.
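With a JDK on the path, you can sanity-check the installation by compiling and running a minimal program. This is an illustrative sketch only (the class name HelloJava is our own, not part of any official tutorial):

```java
// HelloJava.java -- compile with `javac HelloJava.java`, run with `java HelloJava`.
public class HelloJava {
    public static void main(String[] args) {
        System.out.println("Hello from the JDK");
        // The java.version system property reports which runtime is in use,
        // e.g. "1.8.0_181" for Java 8 or "10.0.2" for Java 10.
        System.out.println("Java version: " + System.getProperty("java.version"));
    }
}
```

If both commands succeed, the compiler (javac, part of the JDK) and the launcher (java, part of the JRE) are correctly installed.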

JDK packages

In addition to choosing your Java version, you will also need to select a Java package. Packages are Java Development Kits that are targeted for different types of development. The available packages are Java Enterprise Edition (Java EE), Java Standard Edition (Java SE), and Java Mobile Edition (Java ME).

Novice developers are sometimes unsure which package is correct for their project. Generally, each JDK version contains Java SE. If you download Java EE or Java ME, you will get the standard edition with it. For example, Java EE is the standard platform with additional tools useful for enterprise application development, such as Enterprise JavaBeans or support for Object Relational Mapping.

It’s also not hard to switch to a different JDK in the future if you find you need to. Don’t worry too much about choosing the correct Java version and JDK package when you are just starting out.

Downloading the JDK

We’ll stick with Java SE for this tutorial, so that we can focus on the core JDK classes and technologies. To download the Java SE JDK, visit Oracle’s official download page. You’ll see the various JDK packages available, as shown in Figure 2.

Before you select the Java SE download, take a minute to look at the other options. There’s a lot cooking in the Java kitchen!


Credits : Eurekalert

 

Cybersecurity researchers at the Georgia Institute of Technology have helped close a security vulnerability that could have allowed hackers to steal encryption keys from a popular security package by briefly listening in on unintended “side channel” signals from smartphones.

The attack, which was reported to software developers before it was publicized, took advantage of programming that was, ironically, designed to provide better security. The attack used intercepted electromagnetic signals from the phones that could have been analyzed using a small portable device costing less than a thousand dollars. Unlike earlier intercept attempts that required analyzing many logins, the “One & Done” attack was carried out by eavesdropping on just one decryption cycle.

“This is something that could be done at an airport to steal people’s information without arousing suspicion and makes the so-called ‘coffee shop attack’ much more realistic,” said Milos Prvulovic, associate chair of Georgia Tech’s School of Computer Science. “The designers of encryption software now have another issue that they need to take into account because continuous snooping over long periods of time would no longer be required to steal this information.”

The side channel attack is believed to be the first to retrieve the secret exponent of an encryption key in a modern version of OpenSSL without relying on the cache organization and/or timing. OpenSSL is a popular encryption program used for secure interactions on websites and for signature authentication. The attack showed that a single recording of a cryptography key trace was sufficient to break 2048 bits of a private RSA key.

Results of the research, which was supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and the Air Force Research Laboratory (AFRL), will be presented at the 27th USENIX Security Symposium August 16th in Baltimore.

After successfully attacking the phones and an embedded system board – which all used ARM processors – the researchers proposed a fix for the vulnerability, which was adopted in versions of the software made available in May.

Side channel attacks extract sensitive information from signals created by electronic activity within computing devices during normal operation. The signals include electromagnetic emanations created by current flows within the devices’ computational and power-delivery circuitry, variation in power consumption, and also sound, temperature and chassis potential variation. These emanations are very different from the communications signals the devices are designed to produce.

In their demonstration, Prvulovic and collaborator Alenka Zajic listened in on two different Android phones using probes located near, but not touching, the devices. In a real attack, signals could be received from phones or other mobile devices by antennas located beneath tables or hidden in nearby furniture.

The “One & Done” attack analyzed signals in a relatively narrow (40 MHz wide) band around the phones’ processor clock frequencies, which are close to 1 GHz (1,000 MHz). The researchers took advantage of a uniformity in programming that had been designed to overcome earlier vulnerabilities involving variations in how the programs operate.

“Any variation is essentially leaking information about what the program is doing, but the constancy allowed us to pinpoint where we needed to look,” said Prvulovic. “Once we got the attack to work, we were able to suggest a fix for it fairly quickly. Programmers need to understand that portions of the code that are working on secret bits need to be written in a very particular way to avoid having them leak.”

The researchers are now looking at other software that may have similar vulnerabilities, and expect to develop a program that would allow automated analysis of security vulnerabilities.

“Our goal is to automate this process so it can be used on any code,” said Zajic, an associate professor in Georgia Tech’s School of Electrical and Computer Engineering. “We’d like to be able to identify portions of code that could be leaky and require a fix. Right now, finding these portions requires considerable expertise and manual examination.”

Side channel attacks are still relatively rare, but Prvulovic says the success of “One & Done” demonstrates an unexpected vulnerability. The availability of low-cost signal processing devices small enough to use in coffee shops or airports could make the attacks more practical.

“We now have relatively cheap and compact devices – smaller than a USB drive – that are capable of analyzing these signals,” said Prvulovic. “Ten years ago, the analysis of this signal would have taken days. Now it takes just seconds, and can be done anywhere – not just in a lab setting.”

Producers of mobile devices are becoming more aware of the need to protect electromagnetic signals of phones, tablets and laptops from interception by shielding their side channel emissions. Improving the software running on the devices is also important, but Prvulovic suggests that users of mobile devices must also play a security role.

“This is something that needs to be addressed at all levels,” he said. “A combination of factors – better hardware, better software and cautious computer hygiene – make you safer. You should not be paranoid about using your devices in public locations, but you should be cautious about accessing banking systems or plugging your device into unprotected USB chargers.”

In addition to those already mentioned, the research involved Monjur M. Alam, Haider A. Khan, Moutmita Dey, Nishith Sinha and Robert Callen, all of Georgia Tech.

###

This work has been supported, in part, by the National Science Foundation under grant 1563991 and by the Air Force Research Laboratory and DARPA LADS under contract FA8650-16-C-7620. The views and findings in this paper are those of the authors and do not necessarily reflect the official views of NSF, DARPA or the AFRL.

CITATION: Monjur M. Alam, et al., “One&Done: A Single-Decryption EM-Based Attack on OpenSSL’s Constant-Time Blinded RSA,” Proceedings of the 27th USENIX Security Symposium.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

 


Credits : Telecompaper 

Software services provider Mera has announced the opening of a software development division in Vilnius. It is Mera’s third European office, following the opening of the company’s branch in Serbia and a headquarters in Switzerland in 2014 and 2016, respectively.

The opening of this R&D centre is in line with the expansion strategy of the company, which has a strong focus on Europe, where 45 percent of its revenues originate.


Credits : Gsmarena

 

Android 9 Pie is out today and it comes with Google’s first ever gesture-based navigation system. That, however, is opt-in at the moment, for devices that are getting updated to Pie. You can enable it if you want, otherwise you’re still able to use the traditional navigation with three software buttons – Back, Home, and Recents.

This situation will not remain the same going forward. For Google devices launching in the future, including the Pixel 3 and Pixel 3 XL which should be made official at an event on October 4, the traditional buttons will be gone.

Thus, the one and only option you’ll get is the new gesture-based navigation with the center pill and a Back button on the left (which appears only when necessary). The information emerged from an Android Central interview with EK Chung, Google’s UX manager for Android handheld and Pixel.

The company chose to retire the multitasking button because user diagnostics showed that very few people actually used it on a regular basis. In user testing of the new gesture system with “normal” consumers, Google found that the most-loved feature was the ability to quickly jump between apps by sliding the pill to the right.

As is par for the course in the Android world, just because Google has its own gesture system now it doesn’t mean that the same exact one will be used by other OEMs. They are still free to ship handsets with the old three-button navigation, or even their own gestures like OnePlus and Motorola have already done. Fragmentation definitely isn’t going away anytime soon.


Credits : Analyticsindiamag

 

Can machine learning be used to accelerate the traditional software development lifecycle? As artificial intelligence and other techniques are increasingly deployed as key components of modern software systems, the hybridisation of AI and ML with the resulting software is inevitable. According to a research paper from the University of Gothenburg, AI and ML technologies are increasingly being componentised and can be more easily used and reused, even by non-experts. Recent breakthroughs in software engineering have allowed AI capabilities to be effectively reused via RESTful APIs as automated cloud solutions.

The ML Impact

AI will play a key role in the design, creation and testing of software. According to a 2016 Forrester Research survey, AI can also help in code generation. The survey further revealed that if an AI software system is given a business requirement in natural language, it can write the code to implement it — or even come up with its own idea and write a program for it. For example, Microsoft’s IntelliSense has been integrated with Visual Studio to enhance the developer experience. In fact, a 2017 State of Testing Survey revealed that testers will spend more of their time and resources on testing mobile and hybrid applications, with the time spent on actual development shrinking.

Examples Of AI Integration In The Software Development Cycle

Google bug prediction tool: As the Google Engineering blog points out, 50 percent of the code changes every month. And as Google’s code base and teams increase in size, it becomes more unlikely that the submitter and reviewer will even be aware that they’re changing a hot spot. The bug prediction tool uses ML algorithms and statistical analysis to determine whether a piece of code is flawed and whether it falls in the confidence range. Source-based metrics that could be used for prediction include how many lines of code there are, how many dependencies are required and whether those dependencies are cyclic, the blog indicates.

Stack Overflow AutoComplete: Code Complete by Emil Schutte is a good case in point where the developer leveraged Stack Exchange data to crank out fully functional code based on the intentions inferred from existing code.

DeepCode: Then there is DeepCode, an AI programming tool developed by a Zurich-based startup, which is being positioned as a new AI code assistant. The tool learns from a corpus of 250,000 rules drawn from public and private GitHub repositories, telling programmers how to fix their code. In simpler terms, it does a thorough code review. It is a good tool for finding bugs in code and helps developers deliver clean and reliable code.

Areas Where AI Will Play A Pivotal Role

Bug fixing: This is one of the biggest areas being revolutionised by AI technologies. Given the huge volume of data that needs to be tested and the human error caused by overlooked bugs, software testing tools such as bugspots show us that programs can leverage AI algorithms to auto-correct themselves with minimal intervention from a human programmer.

Code Optimization: Compilers are programs that process a high-level programming language and convert it into machine language, or instructions that can be performed by machines. Compilers can fix old code without needing the original source, and in a short period of time. For example, the Helium software developed by Adobe and the MIT Computer Science and Artificial Intelligence Laboratory automated the task of fixing old code without requiring the original source, thereby making the next generation of code faster. A task that would take an engineer up to three months or more was reduced to mere days. The Helium software was used to optimise the performance of Photoshop filters by up to 75 percent.

Testing: AI-driven testing has been around for some time, and there is a slew of open source tools that use AI to generate test cases and perform regression testing. For example, Appvance, pegged as an AI-driven software test automation tool, uses AI for performance and load testing and to generate test cases based on user behaviour. Meanwhile, Testim.io deploys machine learning to accelerate the authoring, execution and maintenance of automated tests. As one user points out, the tool becomes smarter as more tests are run. Then there is Functionize, a machine-learning-based testing platform that uses ML for functional testing of web and mobile applications, thereby reducing the time needed to manage test infrastructures.


Credits : Latestindustrynews

 

In this Motion Control Software Market research report, the major factors driving the growth of the market are documented, and the business partners and end users are described at length. The structure of the industry, and the trends and challenges shaping the market internationally, are likewise part of this broad analysis. Numerous meetings were held with prominent industry leaders to obtain reliable, up-to-date insights into the market.

The manufacturing industry is developing quickly, and there is an increased need to improve manufacturing processes. The installation of advanced automation systems, such as motion control systems, helps in reducing defects, increasing process visibility and improving process efficiency. Process automation helps in resolving the complex task of analyzing real-time data and in reducing human intervention and the time taken for operational procedures.

The incorporation of wireless technology and wireless devices has benefited the motion control software market, as data can be retrieved from every connected device and stored in cloud storage. Increased connectivity between hardware and software improves the efficiency of industrial processes and enhances users’ strategic decision-making capability. Vendors are developing wireless embedded chips that can be installed in motors and are compatible with motion control software. Cloud technology helps with data sharing, as the data can be recovered and accessed securely. This will increase the need for cloud-based and wireless technology, which will be one of the key trends contributing to the growth of the motion control software market.

The leading vendors in the market are –

ABB

Moog

National Instruments

Physik Instruments

Rockwell Automation

The other prominent vendors in the market are SIGMATEK, LINAK, 3S-Smart Software Solutions GmbH & CODESYS, Mitsubishi Electric, Galil, Trio Motion Technology, and Siemens.

The report segments the Motion Control Software market on the basis of key criteria and studies each of the segments, along with their sub-segments, in a detailed manner. By revealing the top segment, the segments with sluggish growth and the fastest growing segments, the report proves valuable for those wishing to invest in the global Motion Control Software market. Readers are able to make correct and smart decisions regarding investments in this market, thereby making profits and securing a strong foothold in the market in the future.

The report provides a detailed overview of key market drivers, trends and restraints, and analyzes the way they affect the Motion Control Software market in both positive and negative aspects. The regions covered in this report are North America, Europe, Asia Pacific, Middle East & Africa and Latin America. Considering the given forecast period and precisely studying each year’s data, the report is drafted to ensure the data meets client expectations. A detailed study of the competitive landscape of the global Motion Control Software market has been given, presenting insights into company profiles, product portfolios, financial status, recent developments, mergers and acquisitions, and SWOT analysis.

Segmentation by product type and analysis of the motion control software market

Robotics

Material handling

Semiconductor machinery

Packaging and labeling machinery

The motion control software is primarily used in robotics, as the software eases the motion control process, provides assistance to robots by offering real-time instruction, and increases the accuracy of robots’ angular movements. Additionally, the software provides precise algorithms and inputs that increase the speed and efficiency of the robots.

Geographical segmentation and analysis of the motion control software market

Americas

APAC

EMEA

The main points which are answered and covered in this Report are-

What will be the total market size in the coming years till 2023?

What will be the key factors which will be overall affecting the industry?

What are the various challenges addressed?

Which are the major companies included?

Table of Content:

Global Motion Control Software Market Research Report 2018-2023

Chapter 1: Motion Control Software Market Overview

Chapter 2: Global Economic Impact

Chapter 3: Competition by Manufacturer

Chapter 4: Production, Revenue (Value) by Region (2018-2023)

Chapter 5: Supply (Production), Consumption, Export, Import by Regions (2018-2023)

Chapter 6: Production, Revenue (Value), Price Trend by Type

Chapter 7: Analysis by Application

Chapter 8: Manufacturing Cost Analysis

Chapter 9: Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10: Marketing Strategy Analysis, Distributors/Traders

Chapter 11: Market Effect Factors Analysis

Chapter 12: Market Forecast (2018-2023)

Chapter 13: Appendix


Credits : Siliconangle

 

Google LLC today released Jib, a new open-source tool that aims to make software containers and the Java programming language work more seamlessly together.

The two technologies are both mainstays of application development in the enterprise. Java has been used to write business software for decades and remains ubiquitous to this day. Software containers are a popular means of building portable applications that work across different kinds of infrastructure.

Google has created Jib to take the hassle out of packaging Java code into containers. This task is a tedious, multistage process when performed the traditional way. A developer has to install and run the Docker container engine, write a set of instructions known as a Dockerfile to define how an application should be built and then push the finished container image to a repository for safekeeping.

Jib consolidates the process into a single step. According to Google, the tool doesn’t require users to install Docker and can figure out how to build an application without needing any specially crafted instructions. Instead of a Dockerfile, the software analyzes project data from the user’s development environment.

According to Google, Jib also uses the collected information to organize software components into “layers.” When a developer updates their application, the tool only rebuilds the relevant layer instead of the entire code base to reduce build times.

“Jib takes advantage of image layering and registry caching to achieve fast, incremental builds,” explained Google engineers Appu Goundan and Qingyang Chen. “It reads your build config, organizes your application into distinct layers (dependencies, resources, classes) and only rebuilds and pushes the layers that have changed. When iterating quickly on a project, Jib can save valuable time on each build by only pushing your changed layers to the registry instead of your whole application.”
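To give a sense of the workflow, here is a minimal sketch of how Jib is wired into a Maven build. The plugin version shown and the target image name (my-registry/coffee-app) are placeholders for illustration; check the project’s documentation for current coordinates:

```xml
<!-- In pom.xml, under <build><plugins> -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>0.9.9</version>
  <configuration>
    <to>
      <!-- The registry/repository to push the built image to. -->
      <image>my-registry/coffee-app</image>
    </to>
  </configuration>
</plugin>
```

Running mvn compile jib:build then builds and pushes the container image in one step, with no Dockerfile and no local Docker daemon.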

By making it easier for Java developers to use containers, Jib could help further expand the adoption of the technology in the enterprise. The tool is the latest addition to the already extensive portfolio of open-source projects that Google has built up in recent years. The company’s most well-known contribution to the container ecosystem is Kubernetes, the go-to software for managing large Docker clusters.


Credits : Infoq

Key Takeaways

  • In enterprise test scenarios, software needs to be tested in the same way as it will run in production, in order to ensure that it will work as expected.
  • A common challenge is that microservice applications directly or indirectly depend on other services that need to be orchestrated within the test scenario.
  • This article shows how container orchestration provides an abstraction over service instances and facilitates replacing them with mock instances.
  • Additionally, service meshes enable us to re-route traffic and inject faulty responses or delays to verify our services’ resiliency.
  • The article contains sample code from an accompanying example Java-based coffee shop application deployed to and tested on Kubernetes and Istio.

In enterprise test scenarios, software needs to be tested in the same way as it will run in production, in order to ensure that it will work as expected. A common challenge is that microservice applications directly or indirectly depend on other services that need to be orchestrated within the test scenario.

This article shows how container orchestration provides an abstraction over service instances and facilitates replacing them with mock instances. On top of that, service meshes enable us to re-route traffic and inject faulty responses or delays to verify our services’ resiliency.

We will use a coffee shop example application that is deployed to a container orchestration and service mesh cluster. We have chosen Kubernetes and Istio as example environment technology.

Test Scenario

Let’s assume that we want to test the application’s behavior without considering other, external services. The application runs in the same way and is configured in the same way as in production, so that later on we can be sure that it will behave in exactly the same way. Our test cases will connect to the application by using its well-defined communication interfaces.

External services, however, should not be part of the test scenario. In general, test cases should focus on a single object-under-test and mask out all the rest. Therefore, we substitute the external services with mock servers.

Container Orchestration

Reconfiguring the application to use the mock servers instead of the actual backends contradicts the idea of running the microservice in the same way as in production, since this would change its configuration. However, if our application is deployed to a container orchestration cluster, such as Kubernetes, we can use the abstracted service names as configured destinations and let the cluster resolve the backend service instances.

The following example shows a gateway class that is part of the coffee shop application and connects against the coffee-processor host on port 8080.

public class OrderProcessor {

    // definitions omitted ...

    @PostConstruct
    private void initClient() {
        final Client client = ClientBuilder.newClient();
        target = client.target("http://coffee-processor:8080/processes");
    }

    @Transactional(Transactional.TxType.REQUIRES_NEW)
    public void processOrder(Order order) {
        OrderStatus status = retrieveOrderStatus(order);
        order.setStatus(status);
        entityManager.merge(order);
    }

    // ...

    private JsonObject sendRequest(final JsonObject requestBody) {
        Response response = target.request()
               .buildPost(Entity.json(requestBody))
                .invoke();

        // ...

        return response.readEntity(JsonObject.class);
    }

    // definitions omitted ...
}

This host name is resolved via the Kubernetes cluster DNS and this will direct traffic to one of the running processor instances. The instance that backs the coffee-processor service, however, will be a mock server, WireMock in our example. This substitution is transparent to our application.

The system test scenario not only connects against the application to invoke the desired business use case, but will also communicate with the mock server, on a separate admin interface, to control its response behavior and to verify whether the application invoked the mock in the correct way. It is the same idea as for class-level unit tests, usually realized by JUnit and Mockito.
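As an illustration of that admin interface, the test could prime WireMock with a stub mapping before invoking the business use case. This is a sketch only: the /processes URL matches the gateway shown earlier, while the response body (a PREPARING status) is a made-up example. WireMock accepts such mappings via a POST to its /__admin/mappings endpoint:

```json
{
  "request": {
    "method": "POST",
    "url": "/processes"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "status": "PREPARING" }
  }
}
```

After the use case runs, the test can query the same admin interface to verify that the application actually invoked the mock in the expected way.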

External Services

This setup allows us to mock and control services that run inside of our container orchestration cluster. But what if the external service is outside of the cluster?

In general, we can create a Kubernetes service without selectors that points to an external IP, and have our application always use that service name, which is then resolved by the cluster. By doing so, we define a single point of responsibility that determines where the service routes to.

The following code snippet shows a Kubernetes service and endpoints definition that routes coffee-shop-db to the external IP address 1.2.3.4:

kind: Service
apiVersion: v1
metadata:
  name: coffee-shop-db
spec:
  ports:
  - protocol: TCP
    port: 5432
---

kind: Endpoints
apiVersion: v1
metadata:
  name: coffee-shop-db
subsets:
  - addresses:
      - ip: 1.2.3.4
    ports:
      - port: 5432

Within different environments, the service might route to different database instances.
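For example, a test environment could back the same coffee-shop-db service name with a different database instance simply by deploying a different endpoints object (the address below is a placeholder):

```yaml
kind: Endpoints
apiVersion: v1
metadata:
  name: coffee-shop-db
subsets:
  - addresses:
      - ip: 5.6.7.8   # test database instance; placeholder address
    ports:
      - port: 5432
```

The application keeps connecting to coffee-shop-db in every environment; only the endpoints definition changes.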

Service Meshes

Service meshes enable us to transparently add technical cross-cutting communication concerns to microservices. As of today, Istio is one of the most widely used service mesh technologies. It adds sidecar proxy containers, co-located with our application containers, which implement these additional concerns. The proxy containers also make it possible to purposely manipulate or slow down connections for resiliency testing purposes.

In an end-to-end test, we can introduce faulty or slow responses to verify whether our application handles these situations properly.

The following code snippet shows an Istio virtual service definition that configures the route to coffee-processor with a three-second delay for 50 percent of the responses and HTTP 500 failures for 10 percent of them.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: coffee-processor
spec:
  hosts:
  - coffee-processor
  http:
  - route:
    - destination:
        host: coffee-processor
        subset: v1
    fault:
      delay:
        fixedDelay: 3s
        percent: 50
      abort:
        httpStatus: 500
        percent: 10
---

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: coffee-processor
spec:
  host: coffee-processor
  subsets:
  - name: v1
    labels:
      version: v1

Now, we can run additional tests and verify how our application reacts to these increased response times and failure situations.

Besides injecting faulty responses, service mesh technology also makes it possible to add resiliency from the environment. Proxy containers can handle timeouts and implement circuit breakers and bulkheads without requiring the application to handle these concerns itself.
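As a hedged sketch of what such environment-provided resiliency can look like, the following virtual service, using the same Istio API as above, adds a request timeout and a retry policy to the coffee-processor route (the concrete values are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: coffee-processor
spec:
  hosts:
  - coffee-processor
  http:
  - route:
    - destination:
        host: coffee-processor
        subset: v1
    timeout: 2s          # overall deadline for the request
    retries:
      attempts: 3        # retry failed calls up to three times
      perTryTimeout: 1s  # deadline for each individual attempt
```

With this in place, slow or failing backend instances are retried and eventually abandoned by the proxy, without any timeout or retry logic in the application code.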

Conclusion

Container orchestration and service meshes improve the testability of microservice applications by extracting concerns from the application into the operational environment. Service abstractions implement discovery and allow us to transparently substitute services or re-route traffic. Service meshes not only enable more complex routing but also allow us to inject failures or slow responses in order to put our applications under pressure and verify their corresponding behavior.

This article is shared by www.itechscripts.com | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Techrepublic

As of PHP 7.1, the php-mcrypt extension was deprecated, and as of PHP 7.2 it was removed entirely. This is a problem, since a number of server software titles still depend upon this encryption tool. Because software like Nextcloud, ownCloud, and many more have yet to shift that dependency, you might find yourself unable to install them without mcrypt on the system. What do you do? No matter how many times you run either apt-get install php-mcrypt or yum install php-mcrypt, it won’t work.

Fortunately, there’s a solution. Said solution falls onto the shoulders of the pecl command. PECL is the PHP Extension Community Library, which serves as a repository for PHP extensions. Through this repository, you can install mcrypt.

What is mcrypt?

The mcrypt extension is a replacement for the UNIX crypt command. These commands serve as a means to encrypt files on UNIX and Linux systems. The php-mcrypt extension serves as an interface between PHP and mcrypt.

Getting mcrypt installed

I’m going to walk you through the process of getting mcrypt installed on Ubuntu Server 16.04. It’s not challenging once you have the necessary dependencies added to your system. With mcrypt installed, you can continue with the installation of the software that depends upon this extension.

With that said, how do we install mcrypt? First, open up a terminal window and install the necessary dependencies with the commands:

sudo apt-get -y install gcc make autoconf libc-dev pkg-config
sudo apt-get -y install php7.2-dev
sudo apt-get -y install libmcrypt-dev

Once the dependencies have been installed, you can install mcrypt with the command:

sudo pecl install mcrypt-1.0.1

And there you go: mcrypt is now installed. Go back to installing whatever server software depends upon this extension and you should be good to go.
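Note that pecl only compiles and installs the shared library; PHP will not load it until the extension is enabled in your PHP configuration. A minimal sketch, assuming Ubuntu's stock PHP 7.2 configuration layout (adjust the paths to your installation):

```shell
# Register the extension with PHP (path assumes Ubuntu's PHP 7.2 packaging)
echo "extension=mcrypt.so" | sudo tee /etc/php/7.2/mods-available/mcrypt.ini
sudo phpenmod mcrypt

# Verify that the module is now loaded
php -m | grep mcrypt
```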

Not gone, just moved

Don’t worry: mcrypt is not gone. It’s just been moved out of PHP and into PECL. But for those who have been installing via php-mcrypt for years, this makes for a pretty big shift. Now, instead of being able to install mcrypt with a single command, you have four to deal with. Even so, at least you still have mcrypt available. Eventually, however, I believe the mcrypt dependency will be migrated to another tool (such as OpenSSL).
