Credits : Techradar

 

If you don’t use a software updater, you might be missing out on important patches to some of the programs you use every day. Many programs update themselves automatically so you can be sure you’re always running the most recent (and most secure) version, but this isn’t the case for all software, and different updaters work in different ways.

For example, you may not be offered updates for programs you don’t use very often, and it can be difficult to remember to launch programs just to see if there’s anything new to download.

This is where a dedicated software updater can help you out. These handy utilities will scan your computer to determine what you have installed, and will then go online to see if there are new versions of any of your applications available. Some utilities will automatically install the updates for you, while others will simply let you know that there’s an update available. Either way, you’ll be able to ensure you’re running the very latest versions of all your favorite programs with very little effort.
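
At their core, these utilities do something quite simple: compare each installed version against the newest known release. Here is a minimal sketch in Python – the program names, version numbers, and the `find_outdated` helper are all invented for illustration:

```python
# Minimal sketch of a software updater's core check: compare installed
# versions against the newest known releases. All program names and
# version numbers here are hypothetical examples.

def parse_version(v):
    """Turn '3.1.10' into (3, 1, 10) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def find_outdated(installed, latest):
    """Return the programs whose installed version trails the latest release."""
    return [name for name, ver in installed.items()
            if name in latest and parse_version(ver) < parse_version(latest[name])]

installed = {"MediaPlayer": "3.1.9", "ArchiveTool": "7.0", "PdfReader": "10.2"}
latest = {"MediaPlayer": "3.1.10", "ArchiveTool": "7.0", "PdfReader": "10.1"}

print(find_outdated(installed, latest))  # only MediaPlayer trails its latest release
```

Real updaters add the hard parts – discovering what is installed and querying vendors online – but the comparison step is essentially this.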

1. Patch My PC Home Updater

The quickest, easiest way to update your software

Portable app
Automatic scans
One-click updates
Not the biggest database

Patch My PC has been around for a while now, and it has gained a large following – something you’ll understand once you try it out. This is a portable app, making it ideal for sticking on a USB drive and keeping friends’ and family members’ computers updated, and it’s delightfully simple to use.

As soon as you launch the program, it will automatically scan your computer, determine which software you have installed, and quickly let you know which needs to be updated. The database of programs it supports is not completely exhaustive, but it’s pretty comprehensive.

If there are any out-of-date programs detected on your computer, you can start to update all of them with a single click – there’s no need to manually start each updater as each of the updates will be downloaded for you in turn. Many programs will update ‘silently’ without the need for any intervention, but for some you will be prompted to allow the update to continue. As an added bonus you can configure update checks on a scheduler so you don’t need to remember to run them manually. Great stuff!

2. Downloadcrew UpdateScanner

Automatically check for updates to hundreds of applications

Automatic and manual scans
Huge software database
Not the most intuitive

Drawing on its sizeable and growing database of software, the Downloadcrew UpdateScanner is able to check for updates across a huge number of titles. The program can be configured to start automatically with Windows and check for updates every time you start your computer, or you can schedule scans for a particular time of day. You can, of course, opt for a manual scan if you prefer.

While the program is undeniably powerful and very thorough when it comes to checking for updates, the way it works is not as smooth and intuitive as some of its rivals. The updater sits in your system tray, and a pop-up lets you know when something is available to download. Click the notification and the main program interface will appear, complete with links to endless programs you may want to install.

Hiding at the top of the screen is a link to download the available updates, and clicking this takes you to the Downloadcrew website where you can download the newest versions of software manually.

3. SUMo

Can check for beta versions
Can exclude certain programs
Not the fastest
Automatic updates aren’t free

SUMo has nothing to do with sumo wrestling – not that we really imagined that you thought that! The name is short for Software Update Monitor, and it does very much what you would expect it to. There’s a slight problem, though: it does it a little slowly.

As you would hope, the program scans your hard drive for software so it knows what you have installed, though this process can be a little on the slow side.

SUMo will then let you know of any programs which need updating and you can manually select those you want to update and download the latest version from the SUMo website.

If you want the advantage of automatic updating, you’ll have to shell out for the Pro version of the tool. There are some nice touches such as being able to check for beta versions of software, and the option to choose to ignore (ie never check for) updates for certain programs. There’s also a secondary tool available, DUMo, that can be used to check for driver updates. A perfect companion.

4. OUTDATEfighter

Not the most comprehensive, but includes some handy extras

Includes software uninstaller
Finds fewer updates than rivals
Contains ads

In tests, OUTDATEfighter seems rather more limited than the competition. The utility found fewer updates than alternative update tools did, which raises the question of whether it will miss something important when it really matters.

In addition to this potential problem, the program interface serves as an advertising billboard for other products by the same company. You’ll find toolbar buttons that link to information about utilities to speed up and protect your computer in a variety of ways.

OUTDATEfighter can also be used to uninstall software you no longer need, as well as managing Windows Updates – it’s not really clear, however, why you would want to go down this route rather than simply using Windows’ own tools.

Ultimately, your mileage may vary with OUTDATEfighter. You may be in luck and find that all of your installed software is supported and detected. It’s worth testing it to find out.


5. Glarysoft Software Update

Great detection rates, but updates aren’t automatic

Well designed interface
Extensive database
No automatic updates
Extra bundled software

Glarysoft has a glorious history of releasing outrageously useful utilities for Windows, so the hope is very much that Glarysoft Software Update makes the grade.


The good news is that it does. This is a quality tool with a great, professional feel and a high update detection rate. For system administrators and homes with multiple computers, there is a remote update option that lets you administer other machines from afar. A lovely idea.

Sadly, as with many other update tools, the update process is a manual one – unless you are willing to pay for an upgrade to the Professional version, in which case it can be automated. A nice touch here is that you are given trial access to Software Update Professional so you can get an idea of how it works and whether it is worth your money.

A word of warning. Take care during the installation of the program that you do not unwittingly install the extra Malware Hunter tool that’s offered to you. You don’t need it.


     

This article is shared by www.itechscripts.com | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Zdnet

 

For at least three years, hackers have abused a zero-day in one of the most popular jQuery plugins to plant web shells and take over vulnerable web servers, ZDNet has learned.

The vulnerability impacts the jQuery File Upload plugin authored by prodigious German developer Sebastian Tschan, most commonly known as Blueimp.

The plugin is the second most starred jQuery project on GitHub, after the jQuery framework itself. It is immensely popular, has been forked over 7,800 times, and has been integrated into hundreds, if not thousands, of other projects, such as CMSs, CRMs, Intranet solutions, WordPress plugins, Drupal add-ons, Joomla components, and so on.

A vulnerability in this plugin would be devastating, as it could open gaping security holes in a lot of platforms installed in a lot of sensitive places.

This worst-case scenario is exactly what happened. Earlier this year, Larry Cashdollar, a security researcher on Akamai’s SIRT (Security Intelligence Response Team), discovered a vulnerability in the plugin’s source code that handles file uploads to PHP servers.

Cashdollar says that attackers can abuse this vulnerability to upload malicious files on servers, such as backdoors and web shells.

The Akamai researcher says the vulnerability has been exploited in the wild. “I’ve seen stuff as far back as 2016,” the researcher told ZDNet in an interview.

The vulnerability was one of the worst kept secrets of the hacker scene and appears to have been actively exploited, even before 2016.

Cashdollar found several YouTube videos containing tutorials on how one could exploit the jQuery File Upload plugin vulnerability to take over servers. One of three YouTube videos Cashdollar shared with ZDNet is dated August 2015.

It is pretty clear from the videos that the vulnerability was widely known to hackers, even if it remained a mystery for the infosec community.

But steps are now being taken to address it. The vulnerability received the CVE-2018-9206 identifier earlier this month, a good starting point to get more people paying attention.

All jQuery File Upload versions before 9.22.1 are vulnerable. Since the vulnerability affected the code for handling file uploads for PHP apps, other server-side implementations should be considered safe.

Cashdollar reported the zero-day to Blueimp at the start of the month, who promptly looked into the report.

The developer’s investigation identified the true source of the vulnerability not in the plugin’s code, but in a change made in the Apache Web Server project dating back to 2010, which indirectly affected the plugin’s expected behavior on Apache servers.

The actual issue dates back to November 23, 2010, just five days before Blueimp launched the first version of his plugin. On that day, the Apache Foundation released version 2.3.9 of the Apache HTTPD server.

This version wasn’t anything out of the ordinary, but it included one major change, at least in terms of security. Starting with this version, the Apache HTTPD server gained an option that allows server owners to ignore custom security settings made to individual folders via .htaccess files. The setting was introduced for security reasons, was enabled by default, and has remained so in all subsequent Apache HTTPD server releases.

Blueimp’s jQuery File Upload plugin was coded to rely on a custom .htaccess file to impose security restrictions to its upload folder, without knowing that five days before, the Apache HTTPD team made a breaking change that undermined the plugin’s basic design.
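
To illustrate the interaction, here are two hypothetical configuration fragments (reconstructions for explanation, not the plugin’s actual files): the first is the kind of .htaccess the plugin relied on to neutralize uploaded scripts; the second is the post-2.3.9 server default that tells Apache to ignore that file entirely.

```apacheconf
# Hypothetical .htaccess placed in the plugin's upload folder: serve every
# uploaded file as inert content instead of executing it as a script.
SetHandler default-handler
ForceType application/octet-stream
Header set Content-Disposition attachment

# Hypothetical excerpt from the main server configuration (the default
# behavior since Apache HTTPD 2.3.9): ignore .htaccess files under this
# tree, which silently disables the protections above.
<Directory "/var/www/html/uploads">
    AllowOverride None
</Directory>
```

With AllowOverride set to None, the upload directory falls back to the server’s normal handlers, so a .php file dropped there may be executed as code rather than served as a harmless download.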

“The internet relies on many security controls every day in order to keep our systems, data, and transactions safe and secure,” Cashdollar said in a report published today. “If one of these controls suddenly doesn’t exist it may put security at risk unknowingly to the users and software developers relying on them.”

Since notifying Blueimp about his discovery, Cashdollar has been spending his time investigating the reach of this vulnerability. The first thing he did was to look at all the GitHub forks that have sprouted from the original plugin.

“I did test 1000 out of the 7800 of the plugin’s forks from GitHub, and they all were exploitable,” Cashdollar told ZDNet. The code he’s been using for these tests is available on GitHub, along with a proof-of-concept for the actual flaw.

As of this article’s publication, of all the projects derived from the original jQuery File Upload plugin that the researcher tested, only 36 were not vulnerable.

But there is still lots of work ahead, as many projects remain untested. The researcher has already notified US-CERT of this vulnerability and its possible impact. A next step, Cashdollar told ZDNet, is to reach out to GitHub for help in notifying all plugin fork project owners.

But looking into GitHub forks is only the first step. There are countless web applications where the plugin has been integrated. One example is Tajer, a WordPress plugin that Cashdollar identified as vulnerable. The plugin had very few downloads, and as of today, it has been taken off the official WordPress Plugins repository and is not available for download anymore.

Identifying all affected projects and stomping out this vulnerability will take years. As it’s been proven many times in the past, vulnerabilities tend to linger for a long time, especially vulnerabilities in plugins that have been deeply ingrained in more complex projects, such as CRMs, CMSs, blogging platforms, or enterprise solutions.

 


 

Credits : Ciol

 

This article presents a hypothesis on what the (not too far in the future) world of AI-assisted software development will look like. In a line, it’ll read something like this: the concepts governing software creation will stay the same, but the pipeline is going to look incredibly different. At almost every stage, AI will assist humans and make the process more efficient, effective and enjoyable.

Our hypothesis is supported by predictions that the AI industry’s revenue will reach $1.2 trillion by the end of this year, up 70% from a year ago. Further, AI-derived business value is expected to reach $3.9 trillion by 2022. We have also factored in our observation of three main themes over the last decade: compute power, data and sophisticated developer tools.

More Compute Power: Easy access to elastic compute power and public clouds has empowered developers, enterprises and tool creators to quickly run heavier analysis workloads through parallelization. According to IDC, cloud-based infrastructure spending will reach 60% of all IT infrastructure by 2020.

More Data: Improved processing power will see digital leaders investing in better collection and utilization of data – 90% of the world’s data was created last year, but utilization is at 1%. It’s slated to grow to 3% or 4% by 2020.

Integration and Distribution of Systems: The integration of disconnected systems using APIs coupled with microservices pattern enables the distribution of previously monolithic systems. This leads to a powerful mix that leverages tools and processes (required for software development) composed of multiple systems, running in different places.

The software creation process consists of 3 phases. They can be further split into 9 different task categories. Interestingly, only some of these categories have seen more investment in AI powered tooling than others. In the course of this article, let’s discuss some of the instances where AI will assist technologists in software development by taking over data analysis and prediction capabilities. Such an evolution will permit technologists to have more time to focus on judgement and creativity related tasks that machines can’t take on.

There is an increasing presence of what we call Intelligent Development Tools. We believe this is because of the three themes above, along with the growing clout of developers, which have prompted dozens of startups to offer developer-focused services such as automated refactoring, testing and code generation. The evolution of these tools can be compartmentalized into three levels of sophistication.

The Levels of Sophistication

The first level focused on the automation of manual tasks, which increased the reliability and efficiency of software creation. For example, test automation reduced cycle time through parallelization, which shortened feedback loops. Deployment automation improved reliability using repeatable scripts. However, it was still humans who analyzed and acted on the feedback.

The next level of sophistication covered tools that permitted machines to take decisions based on fixed rules. Auto-scaling infrastructure is a good example of this. Machines could now determine the compute power required to service the load being handled by an application, while humans configured the bounds and steps within which the compute power could scale.
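
A rule-based auto-scaler of this kind can be sketched in a few lines: the machine picks the instance count from the observed load, while the human-configured bounds and step size constrain it. All figures here are hypothetical:

```python
# Toy version of rule-based auto-scaling: the machine decides when to scale,
# humans set the bounds (min/max instances) and the step size. The target
# load and all numbers are invented for the example.

def desired_instances(current, load_per_instance, target_load=70,
                      min_instances=2, max_instances=10, step=1):
    """Scale out/in by `step`, keeping per-instance load (%) near the target."""
    if load_per_instance > target_load:
        return min(current + step, max_instances)  # too hot: add capacity
    if load_per_instance < target_load / 2:
        return max(current - step, min_instances)  # mostly idle: shed capacity
    return current                                  # within the comfort band

print(desired_instances(current=4, load_per_instance=85))  # scale out
print(desired_instances(current=4, load_per_instance=20))  # scale in
print(desired_instances(current=4, load_per_instance=60))  # hold steady
```

The third level described next would let the system learn and adjust the thresholds themselves, rather than having humans fix them in advance.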

The final level of sophistication will enable machines to evolve without human intervention – analyzing data and learning from it, will empower tools to mutate or augment rules that allow them to take increasingly complex decisions. We wanted to share a few ideas of how AI can augment the software development cycle.

The Software Development Cycle


Ideation – Analysis of usage data to find anomalies/unexpected behaviour.

Prototyping – Low / no-code tools to create clickable prototypes from hand-drawn sketches.

Validation – Leverage past usage data to test new designs/ideas.

Development – Automated code generation and refactoring.

Requirements Breakdown – Generation of positive and negative acceptance criteria based on past requirements.

Testing – Automating test creation and maintenance.

Deploy – Ensure zero-impact deployments by predicting the right time to deploy and the rate of the roll-out.

Monitoring – Use Telemetry Data to predict hardware/system failure.

Maintenance – Automate identification and removal of unused features.

One of the most common approaches to building AI use cases is leveraging the neural network; a computer system modelled on the human brain and nervous system. The popular approach involves developing a single algorithm that encompasses the intermediate processing steps of multiple neural net layers, leading to a direct output from the input data. This process is successful and provides very good results when large samples of labelled data is available. The challenge with this method is that the internal processing of learning is not clearly explainable and sometimes gets difficult to troubleshoot for accuracy.
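
As a concrete (if toy) illustration of that end-to-end idea – several layers composed into a single function from input to output – here is a minimal two-layer network forward pass. The weights below are arbitrary numbers picked for the example, not a trained model:

```python
# Toy illustration of the end-to-end neural approach: layers composed into
# one function, input in, output out. Weights and inputs are invented.

def relu(xs):
    """Simple non-linearity: negative values become zero."""
    return [max(0.0, x) for x in xs]

def dense(inputs, weights, biases):
    """Fully connected layer; each row of `weights` holds one output
    unit's input weights."""
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

def network(x):
    # Layer 1: 2 inputs -> 2 hidden units, then a ReLU non-linearity.
    h = relu(dense(x, [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]))
    # Layer 2: 2 hidden units -> 1 output. The intermediate steps stay
    # internal, which is exactly why such models are hard to inspect.
    return dense(h, [[1.0, -1.0]], [0.05])

print(network([1.0, 2.0]))
```

The opacity the article mentions is visible even at this scale: the hidden activations in `h` carry no human-readable meaning, and in real networks with millions of weights, tracing why a particular output emerged becomes genuinely difficult.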

AI Assistance

Ideation Augmented: Take the example of an e-commerce website. Here, people analyze data to find where users drop-off during an ordering funnel and come up with ideas to improve conversion. In the future, we could have machines that blend usage analytics with performance data to derive if slow transactions are the cause for drop-offs. Additionally, these machines could also identify faulty code that when fixed, will improve performance.
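
To make that blending concrete, here is a minimal sketch of joining usage analytics (a conversion funnel) with performance data (page latency) to flag steps where heavy drop-off and slow transactions coincide. The funnel numbers, latencies, and thresholds are all hypothetical:

```python
# Sketch of machine-assisted ideation for an e-commerce funnel: correlate
# step-by-step user counts with page latency. All figures are invented.

def flag_suspect_steps(funnel, latency_ms, drop_threshold=0.5, slow_ms=1000):
    """Return funnel steps that both lose most of their users and respond slowly."""
    steps = list(funnel)
    flagged = []
    for prev, step in zip(steps, steps[1:]):
        drop = 1 - funnel[step] / funnel[prev]  # share of users lost at this step
        if drop > drop_threshold and latency_ms[step] > slow_ms:
            flagged.append(step)
    return flagged

# Users reaching each step of the ordering funnel, and each step's latency.
funnel = {"cart": 1000, "address": 620, "payment": 180, "confirm": 160}
latency_ms = {"cart": 310, "address": 350, "payment": 2900, "confirm": 400}

print(flag_suspect_steps(funnel, latency_ms))  # the slow, leaky step
```

A tool of the kind the article envisions would go one step further and point at the code paths responsible for the slow step.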

Testing Augmented: Writing tests for legacy systems, even with documentation, is very hard. Automated test creation tools that leverage AI to map out the application’s functionality, using usage and code analytics, allow teams to quickly build a safety net around such legacy systems. This allows technologists to make changes without breaking existing functionality.

Maintenance Augmented: A large part of maintenance-related costs, today, are spent on managing redundant features. Identification of these redundancies is a complex error-prone process because people have to correlate data with multiple sources. Allowing AI tools to take up this role of connecting and referencing data across sources will automate marking of unessential features and associated code.

Given the nature of evolution in the dynamic software development world, here’s our recommendation for how to prepare and focus efforts:

1. Recognize and leverage elastic infrastructure which ensures the ability to add and remove resources ‘on the go’ to handle the load variation

2. Equip your teams to strategically collect and process data, an invaluable asset whose volume will only increase given the prevalence of emerging tech like voice, gesture etc.

3. Include a stream within your investment strategy that grows AI-assisted software creation – rule-based intelligent tools and self-learning tools.


Credits : Searcherp.techtarget

 

Information security risks in supply chain software are becoming increasingly prevalent, particularly as global companies have become more dependent on third-party vendors.

According to Symantec, more and more attackers are injecting malware into the supply chain to infiltrate organizations. In fact, there was a 200% increase in these attacks in 2017 — one every month compared to four attacks annually in previous years.

Supply chain software offers a new arena to threat actors intent on penetrating enterprise networks, said Peter Nilsson, vice president of strategic initiatives at MP Objects, a provider of supply chain orchestration software in Boston.

“Previously, people had their ERPs behind their very tight firewalls, and no one from the outside could get in without being monitored by the hawk eyes of the IT department,” he said. “Now, enterprises are saying, ‘We need to collaborate with our partners and we have to open up our ERP and let them in.'”

But if those third parties don’t have adequate security, attackers can infiltrate their systems to attack the enterprise.

Any time an enterprise introduces software into the mix of its supply chain, it runs the risk of cybersecurity issues, said Justin Bateh, supply chain expert and professor of business at Florida State College in Jacksonville, Fla. Most risks are caused by not having the proper controls in place for third-party vendors.

“There are many low-tier suppliers that will have weak information security practices, and not having clean and limited guidelines for these providers about security expectations will pose a significant threat,” he said.

Causes of potential security risks

Poor internal security procedures and a lack of compliance protocols can also introduce potential threats, including marketing campaign schemes, privacy breaches and disruption of service attacks, according to Bateh.

In addition, smaller companies may use inadequate software coding practices. As such, larger enterprises can’t be sure the software is being checked for quality as it goes through its development cycle, said Lisa Love, owner and president of LSquared, an information security consulting firm in Greenwood Village, Colo.

Consequently, something as unintentional as bad scripting can introduce vulnerabilities into the providers’ supply chain software, as well as into the enterprise, which attackers could then exploit, she said.

Jason Rhoades, a principal at Schellman & Co., a provider of attestation and compliance services in Tampa, Fla., agreed that in recent years the enterprise’s attack surface has increased along with the tremendous growth in the supply chain.

“Looking at the recent Equifax breach confirms that vendor and supply chain software poses a true security risk that the enterprise cannot ignore,” he said.

Equifax blamed its 2017 breach on a flaw in the third-party software it was using. And the massive breach of Target’s systems in 2013 was caused by attackers who stole the login credentials of its HVAC contractor and used them to infiltrate Target’s network.

Jonathan Wilson, a partner at the law firm Taylor English Duma LLP in Atlanta, agreed that many security risks come from the data connections and handoffs in the supply chain moving from smaller to larger providers.

“A lot of these small companies and startups don’t have robust data security systems,” said Wilson, who has represented a Fortune 500 international supply chain logistics provider. “They get a breach or some sort of exploitation is involved, and by working their way up the chain, the attacker can utilize the permissions that the smaller vendors get to obtain access to the larger company’s system.”

Another way hackers could introduce risk into an enterprise is via the supply chain software itself, according to Michael O’Malley, vice president of strategy at Radware, a provider of cybersecurity services in Mahwah, N.J. Most supply chain applications have some type of web interface with a login page to ensure that only the right people are authenticated and allowed to access the application.

Attackers can also use credential stuffing to infiltrate an enterprise via an unprotected web interface, he said. The attackers can hack into the interface, enter a legitimate username and password, and pose as someone else.

“Or they do something else offline through a phishing email scam to get users of the software to click on a link or respond to an email and dupe them into sharing their credentials,” O’Malley said. “They can then use those credentials to log in or break into the application.”
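
A common first line of defense against the credential stuffing described above is simply counting how many distinct accounts each source IP has failed against. Here is a minimal sketch – the log entries, IP addresses, and threshold are invented for the example:

```python
# Sketch of credential-stuffing detection: one source IP cycling through
# many usernames with failed logins is suspicious. All data is hypothetical.
from collections import defaultdict

def flag_stuffing_ips(failed_logins, min_distinct_users=3):
    """failed_logins: iterable of (source_ip, username) failed attempts.
    Returns IPs that failed against at least `min_distinct_users` accounts."""
    users_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        users_per_ip[ip].add(user)
    return sorted(ip for ip, users in users_per_ip.items()
                  if len(users) >= min_distinct_users)

attempts = [
    ("203.0.113.9", "alice"), ("203.0.113.9", "bob"),
    ("203.0.113.9", "carol"), ("203.0.113.9", "dave"),
    ("198.51.100.7", "alice"), ("198.51.100.7", "alice"),  # one user retrying
]
print(flag_stuffing_ips(attempts))  # only the IP probing many accounts
```

Production defenses layer more signals on top – rate limiting, device fingerprinting, breached-password checks – but the per-source, cross-account pattern is the telltale core.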

Another way attackers can penetrate an enterprise’s network via the supply chain is from the inside, according to O’Malley. This is where IoT devices come into play. More and more of these supply chain software applications — particularly in high-tech manufacturing — are part of an IoT network that provides different diagnostics and information about the machines on a factory floor.

These devices are providing all this real-time input back to the supply chain management software application. However, they can be easily compromised because they tend to be very inexpensive Linux-based devices that weren’t designed with security in mind, and they don’t have the necessary protections against hacking, he said.

“What we commonly see is that within minutes of these devices being connected to the internet, someone infiltrates them and puts a piece of malware or a bad bit of code on them,” O’Malley said. “And those are then used later as an attack on something else or in an attack on the software application itself.”


 

Credits : Datanami

 

Application development has become a cloud-focused initiative in many enterprises. Consequently, the languages, tools, and platforms needed to support today’s development initiatives are rapidly evolving.

Application development is also a discipline in which data science is assuming a greater role. To support the growing range of development projects with artificial intelligence (AI) at their core, enterprises are having to continually transform DevOps workflows to support continual building, training, and iteration of deep learning, machine learning, and other statistical models for deployment into production cloud environments.

As we look ahead to 2019, we expect to see the following dominant trends in enterprise application development:

  • Open development ecosystems will be at the heart of every tool vendor’s go-to-market strategy: Practically every vendor – large and small, established and startup – has pinned its future on participating in the open-source community. Some took that open commitment even further in 2018. In the year gone by, Microsoft took on a special status in the open-source world with its acquisition of GitHub, the foremost DevOps platform in the open-source ecosystem. In 2019, Microsoft will continue to abide by its express commitment to allow GitHub to operate in vendor-agnostic fashion in support of any language, license, tool, platform, or cloud that developers wish to use. In addition, Microsoft is likely to open-source more of its software projects and to refrain from asserting claims on a wider range of its IP patents, consistent with its recent joining of the Open Invention Network. The vendor is likely to assume a more proactive role in the open-source community as an evangelist for the new era of post-proprietary software development.
  • Serverless will dominate new cloud-native application development: Cloud application developers flocked to functional programming, also known as serverless, in a big way in 2018. This trend shows no signs of slowing down, as evidenced by the growing range of serverless tools, interfaces, projects, and other initiatives that have come to market this year. It’s also evident in the eagerness with which developers are adopting these offerings. In 2019, we’re likely to see the open-source Knative serverless project implemented by many vendors beyond its core developers Google, Pivotal, IBM, Red Hat and SAP, with Microsoft, AWS, and Oracle likely to come on board during the year. In addition, it’s very likely that Knative will be submitted to CNCF for development and governance under its growing cloud-native stack.

  • Developers will build hybrid serverless and containerized cloud applications: Hybrid clouds are becoming common in many enterprise IT strategies. At the application level, more developers are building hybridized cloud applications that incorporate data, workloads, and other resources that span public and private clouds. In 2019, we’ll see more development tools that enable hybridization of heterogeneous containerization and serverless environments. Adoption of the emerging Knative project will accelerate the creation of hybridized serverless applications that run over federated Kubernetes multiclouds.
  • Transactional applications will shift toward the cloud’s edges: Conversational commerce, Alexa style, is the harbinger of the more pervasive edge-commerce future that awaits us all. In 2019, developers will increasingly build transactional applications that are designed to operate over entirely distributed IoT, edge, mesh, and other cloud fabrics. To support these radically decentralized environments, more enterprises will use blockchains and smart contracts to provide immutable logs, enable edge-to-edge transactional integrity, and ensure full transparency and accountability. However, it will still take 2-3 years, at least, for all the necessary technological, commercial, regulatory, and other standard practices to coalesce into a new edge-based transactional backplane for any-to-any e-commerce.
  • Data-science workbenches will adopt standardized cloud-native DevOps: AI is the heart of modern applications. Developing AI applications for the cloud increasingly requires building containerized microservices that are orchestrated within and across Kubernetes clusters over DevOps workflows. In the past year, the AI community has developed an open-source project called Kubeflow that provides a framework-agnostic pipeline for making AI microservices production-ready across multi-framework, multi-cloud computing environments. Early adopters of Kubeflow include Agile Stacks, Alibaba Cloud, Amazon Web Services, Google, H2O.ai, IBM, NVIDIA, and Weaveworks. In 2019, we’ll see the project mature and be implemented more broadly in commercial AI DevOps toolchain solutions. In this way, more enterprise app-development teams will be able to align their DevOps processes across teams working on AI and other cloud-native development projects.
  • Python, Kotlin, and Rust will become core languages for building new applications: Mobile application developers will continue to rely on JavaScript, Java, Objective-C, and PHP. In 2019, other languages will grow in importance in developer toolkits to address the requirements of many hot new applications. Most importantly, Python has become the go-to language for AI, Internet of Things (IoT), Web, mobile, and gaming apps, owing to the fact that it’s easy to learn and use on practically any platform. Kotlin’s superior flexibility may enable it to replace Java at some point in the standard Android developer’s repertoire, while Swift’s compact, clear syntax is building momentum among iOS developers. Rust’s support for memory-safe concurrency gives it a leg up on other languages for IoT, embedded, and other applications that require always-on 24×7 robustness.
  • Client-side AI frameworks will transform Web application development: JavaScript frameworks such as React are the heart of rich application development for Web, mobile, and other client-side edge application platforms. In 2019, more developers will build edge applications in JavaScript frameworks that enable richly interactive browser-based experiences, platform-native performance parity, and AI-powered client-side intelligence. GPU-accelerated client-side AI will become the heart of edge applications, as adoption of such open-source frameworks as TensorFlow.js, Brain.js, and TensorFire continues to grow.
  • Advances in GPUs will stimulate innovation in immersive applications: Users are adopting augmented, mixed, and virtual reality applications in a wider range of industrial, business, scientific, and consumer uses. Gaming, in particular, has been a huge growth area for these immersive applications, owing in part to the availability of high-performance, low-cost GPUs on more client platforms. In 2019, we’ll see this trend accelerate as the new Nvidia Turing GPUs, with their lightning-fast real-time raytracing, come to market in support of next-generation immersive apps that combine photorealistic visuals with AI-driven contextual intelligence. Developers will build a new generation of GPU-aware smart camera applications that leverage the client-side AI frameworks, such as TensorFlow.js, to support fluidly continuous immersive visuals even in disconnected and intermittently connected usage scenarios.
  • Robotic process automation will become a principal development platform for AI-driven apps: Robotic process automation has been one of the chief growth sectors in the software market over the past year. As an enabler for developing automation apps that emulate how people carry out myriad tasks, RPA has become a principal use case for AI in the workplace. Though AI was traditionally used in RPA to infer application logic from externally accessible artifacts, its role has expanded to enable the creation of intelligent bots for business process automation. In 2019, we’ll see a growing role for AI in RPA to enable development of bots that can be orchestrated as microservices across Kubernetes environments. Through the adoption of cloud-native interfaces, RPA vendors will be able to address more IoT, edge, and multicloud opportunities.
  • AI-augmented programming tools will make developers more productive: Software developers have long used automated code generation tools to lighten the load. Augmented programming refers to the next generation of “no code,” “low code,” and other approaches for automating coding and other development tasks. In 2019, we expect to see more of these tools incorporate abstraction layers that allow developers to write declarative business logic that is then translated by the tools into procedural programming code. In addition, more augmented programming tools will incorporate AI to generate code, by means of machine learning algorithms that have been trained on human-developed codebases maintained in GitHub and other repositories. More of these AI-augmented programming tools will rely on embedded graph models and leverage reinforcement learning to compile declarative specifications into code modules that are automatically built, trained, and refined to achieve the intended programming outcomes.

  • Conversational user interfaces will grow less chatty but more useful: Chatbots have been a growing focus for application developers over the past several years. They’ve entered the consumer IoT and mobility arenas through Amazon Alexa, Google Assistant, and similar voice-activated appliance initiatives, while also finding their way into bot-powered text chat features in more enterprise applications. In 2019, we’ll see developers tap into sophisticated AI-powered digital assistant platforms such as Google Duplex to enable chatbots to automate more tasks predictively, thereby becoming paradoxically less chatty but more productive.

  • Digital wellness will become a key mobile-app usability criterion: Users’ growing dependency on devices is undeniable, and it’s beginning to impact how developers approach building mobile applications. Though no one seriously believes that the average user will rely on their devices any less in the future, there is a growing repertoire of mobile application features, such as predictive automation of routine tasks and context-adaptive suppression of distracting notifications, that can help users unglue their frantic eyeballs from their smartphones now and then. Google’s emphasis on “digital wellness” features in its new Android 9 Pie operating system signals that we’ve entered a new era in mobile application development. In 2019, mobile application developers will leverage the predictive, adaptive, contextual, and other usability features in this and other mobile platforms to help users stay sane, focused, and productive amid the growing glut of mobile devices in their lives.
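To make the “immutable logs” in the edge-commerce prediction above concrete, here is a minimal Python sketch of a hash-chained, append-only log. It illustrates only the tamper-evidence property that blockchains provide, not any particular platform’s wire format; all names and record fields are invented for illustration.

```python
import hashlib
import json

def append_entry(log, payload):
    """Append a transaction to a hash-chained log.

    Each entry embeds the hash of its predecessor, so altering any
    earlier entry invalidates every hash that follows it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev_hash": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return log

def verify(log):
    """Recompute every hash and check that the chain links up."""
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"payload": entry["payload"],
                        "prev_hash": entry["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"sku": "A-100", "qty": 2})
append_entry(log, {"sku": "B-200", "qty": 1})
print(verify(log))               # True
log[0]["payload"]["qty"] = 99    # tamper with history
print(verify(log))               # False
```

Real blockchain platforms add consensus, replication, and signatures on top of this chaining; the point here is only that an edge node can cheaply detect a rewritten transaction history.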
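The declarative-to-procedural translation described in the augmented-programming prediction can be caricatured in a few lines of Python. The rule format, operators, and field names below are invented for illustration; real low-code platforms use far richer specification languages.

```python
import operator

# Supported comparison operators for the toy rule language.
OPS = {"==": operator.eq, ">": operator.gt, "<": operator.lt}

def compile_rule(rule):
    """Translate a declarative condition list into a callable predicate.

    Each condition is a (field, op, value) triple; the compiled
    predicate is the conjunction of all conditions.
    """
    def predicate(record):
        return all(OPS[op](record[field], value)
                   for field, op, value in rule)
    return predicate

# Declarative spec: approve orders over $100 from active customers.
approve = compile_rule([("total", ">", 100), ("status", "==", "active")])

print(approve({"total": 250, "status": "active"}))   # True
print(approve({"total": 250, "status": "closed"}))   # False
```

The business analyst authors only the data-like rule list; the procedural control flow is generated. AI-augmented tools push this further by learning the translation itself from existing codebases.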

 

This article is shared by www.itechscripts.com | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Phoronix

 

While Windows users last week were greeted by the Radeon Software Adrenalin 2019 driver, on the Linux side it was the Radeon Software for Linux 18.50 release. The only listed public change for this 18.50 Linux hybrid driver build was RHEL 7.6 support, but I’ve since been able to test and confirm that the Radeon RX 590 works with this new Linux driver package. As a result, here is a look at Radeon RX 590 performance from this “AMDGPU-PRO” driver build compared to the latest open-source driver stack in the form of Linux 4.20 with Mesa 19.0-devel.

This article offers an initial look at how the Radeon RX 590 graphics card performs between these two different AMD Linux graphics driver options. Radeon Software for Linux 18.50 is the first release with RX 590 support, owing to the few AMDGPU kernel patches needed to get this newest Polaris variant working on Linux. Those RX 590 AMDGPU patches are in the process of landing in the mainline Linux 4.20 kernel.

When benchmarking the “PRO” 18.50 OpenGL/Vulkan driver components against the fully open-source alternative, Mesa 19.0-devel was used via the Padoka PPA on this Xubuntu 18.04 test box. No hardware changes were made between the different test driver configurations.

Using the Phoronix Test Suite, a variety of OpenGL and Vulkan Linux gaming benchmarks were carried out with the Sapphire Radeon RX 590 on both of the drivers.

Credits : Windpowerengineering

 

AnalySwift, a provider of efficient high-fidelity modeling software for composites and other advanced materials, announced the launch of its Academic Partner Program, through which it will offer universities no-cost licenses for academic research.

“We have always been close to the academic community, where both the SwiftComp and VABS software programs originated,” said Allan Wood, President & CEO of AnalySwift. “Our Academic Partner Program honors that tradition and broadens university access to cutting-edge simulation tools.”

Academic licenses of VABS and SwiftComp have always been available to universities for purchase, but the new program offers the licenses at no cost.

“Engineering faculty and students can benefit greatly from the full versions of the programs,” said Dr. Wenbin Yu, CTO of AnalySwift. “These are tools being used in industry to model complex, real composites, including wind turbine and helicopter rotor blades, and deployable space structures made from high-strain composites (HSC).”

The composite simulation programs are typically used in aerospace and mechanical engineering programs, such as for wind-turbine blades, with emerging applications in other areas.

“Since 2014, VABS has become our method of choice for rotor-blade structural design and optimization at our institute,” explained PhD student Tobias Pflumm at the Technical University of Munich. “With its help, we have successfully designed, tested and manufactured the rotor blades of our Autonomous Rotorcraft for Extreme Altitudes or AREA. We are currently using VABS extensively within a multi-disciplinary design environment to quantify uncertainties in the rotor blade design process.”

Inaugural members of the Academic Partner Program include the University of British Columbia (Composites Research Network), Technical University of Munich (Institute of Helicopter Technology), and Carleton University (Rotorcraft Research Group).

Credits : Laravel-news

 

Yesterday the PHP team released PHP 7.3.0 for general availability (GA), marking the third feature update to PHP 7. You can download the latest version from the official PHP downloads page. You can also get all the nitty-gritty details about PHP 7.3 by reading the PHP 7 changelog on the official site.

While the stable release is now out, you will have to wait a bit longer for the migration guide, which should be available shortly.

If you haven’t read much about PHP 7.3 yet, here are the highlight features in this release:

  • Trailing Commas in function calls
  • JSON_THROW_ON_ERROR flag for json_encode() and json_decode()
  • Flexible Heredoc and Nowdoc syntax
  • An is_countable() function
  • list() reference assignment
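To make one of those features concrete: with the new JSON_THROW_ON_ERROR flag, json_decode() and json_encode() throw a JsonException on failure instead of forcing callers to poll json_last_error() after every call. Python’s json module has always used exception-based error handling, so the equivalent control flow can be sketched there (the parse_config helper below is invented for illustration):

```python
import json

def parse_config(text):
    """Exception-based JSON error handling, the pattern that PHP 7.3's
    JSON_THROW_ON_ERROR flag enables for json_decode()."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as err:
        # Without the flag, PHP code had to check json_last_error()
        # after every decode; an exception carries the detail itself
        # and propagates automatically if unhandled.
        raise ValueError(f"bad config: {err}") from err

print(parse_config('{"debug": true}'))   # {'debug': True}
try:
    parse_config('{oops}')
except ValueError as err:
    print("caught:", err)
```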

Besides the flagship PHP 7.3 announcement, December 6th brought five PHP releases in total.

Among the five releases, PHP 5.6.39 and PHP 7.0.33 are both security releases and are expected to be the last in their respective branches. You should view these versions as final unless an unforeseen security issue warrants another release.

 

Credits : Theserverside

 

The most popular articles on TheServerSide tend to be the highly technical ones that hit directly at the heart of the enterprise Java developer. While we report important news, discuss interesting trends and cover key conferences, the most-read articles tend to be the ones that deal directly with development. For 2018, our most popular articles were Java developer tutorials on new APIs, frameworks such as Spring or the tools and techniques server-side developers use to move code into production.

Here is some of our best and essential coverage for developers on the front lines.

RESTful APIs

Integration is always a challenge on the back end. So it’s no wonder that readers have the development of RESTful APIs with both Spring and Java EE on their minds. These Java developer tutorials were particularly popular:

  • Step-by-step Spring Boot RESTful web services tutorial with SpringSource Tool Suite.
  • Step-by-step RESTful web service tutorial with Eclipse.

Web-centric applications

This article on using Spring boot to develop web-centric applications was a surprise hit:

  • Spring MVC tutorial: How Spring Boot web MVC makes Java app development easy.

Git and GitHub integrations

Of course, our readers are interested in more than just development APIs. They want to know about the tools that make developers productive and how to use them effectively. Git topped the list of tools enterprise developers are learning to master. These Java developer tutorials on Git and GitHub integration were extremely popular:

  • Five basic Git commands developers must master.
  • What is the difference between GitHub and Git?
  • Need to undo previous local commits? Just git reset and push.

CI/CD tools for DevOps

DevOps is also a topic that is picking up steam, particularly ways to put DevOps tools to use. TheServerSide in 2018 offered these continuous integration and continuous delivery tutorials:

  • Jenkins CI tutorial for beginners.
  • Jenkins interview questions for DevOps engineers.
  • Tips and tricks for the Jenkins Git Plugin.

Maven, the jack of all builds

These articles showed that Maven, the Swiss army knife of Java development tools, remained a popular topic:

  • Jenkins vs. Maven: Compare these build and integration tools.
  • Why you need to master Maven’s fundamental concepts.
  • How to install Maven and build apps with the mvn command line.

TheServerSide will continue to cover a variety of topics that touch all areas of software development, from ensuring code quality to how to best embark upon a DevOps transition. As always, the focus will be on what empowers our readers to be better and more productive developers.

Credits : Zdnet

 

The rise of collaborative robots (cobots) is bringing robots and humans closer together than ever before, but robots still lack some important social graces. Researchers at Yale University have developed a robotic system that helps robots be more polite (and more importantly, more useful). They are teaching robots to respect ownership of objects.

“As robots begin to be used in our homes, schools, and workplaces, it is important that they be able to understand the social conventions that we use every day,” researcher Brian Scassellati explains.

Scassellati, along with Xuan Tan and Jake Brawer, developed a system to help robots distinguish between tools that they own and tools that other people or robots own (or are temporarily using).

Scassellati says, “I want my robot at home to understand that it is allowed to clear the dishes from the table when we have finished eating, but not before. I want my robot at work to know that it can borrow the screwdriver that I’m not using, but that it cannot borrow my coffee cup. Knowing how to work side-by-side with people is a skill that many robots will need.”

To accomplish this, the researchers combined two different kinds of machine learning representation: one that uses explicit rules, and another that uses experiences to predict an object’s likely owner.

They used a technique called Bayesian inference, which Scassellati explains is a statistical technique that the robot uses to keep track of how certain it is about a particular fact or idea in such a way that it can update that certainty as more information becomes available.

The research is pre-published on arXiv. According to the paper, “Ownership is represented as a graph of probabilistic relations between objects and their owners, along with a database of predicate-based norms that constrain the actions permissible on owned objects.”
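The representation quoted above can be loosely sketched in Python: a probabilistic object-to-owner graph, a predicate-based norm gating actions, and a Bayesian update as observations arrive. Everything below (the names, the likelihoods, the 0.8 threshold) is invented for illustration and is not the authors’ code.

```python
def bayes_update(prior, likelihood):
    """Posterior over owners, given each owner's likelihood of
    producing the observation (Bayes' rule with normalization)."""
    unnorm = {o: prior[o] * likelihood.get(o, 0.0) for o in prior}
    total = sum(unnorm.values())
    return {o: p / total for o, p in unnorm.items()}

# Probabilistic ownership graph: object -> distribution over owners.
ownership = {"screwdriver": {"alice": 0.5, "nobody": 0.5}}

# Predicate-based norm: borrowing is forbidden once some person
# probably owns the object (threshold chosen arbitrarily here).
def may_borrow(obj):
    dist = ownership[obj]
    return max(p for o, p in dist.items() if o != "nobody") < 0.8

print(may_borrow("screwdriver"))   # True: ownership still uncertain

# Observation: Alice is seen using the screwdriver. She is far more
# likely to use it if she owns it, so the posterior shifts toward her.
ownership["screwdriver"] = bayes_update(
    ownership["screwdriver"], {"alice": 0.9, "nobody": 0.2})
print(may_borrow("screwdriver"))   # False: the robot now defers to Alice
```

The real system layers learned ownership predictions and a richer norm database over this structure, but the interplay is the same: beliefs about owners are updated probabilistically, while hard rules decide which actions are permissible.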

The researchers used a Baxter robot (from the now-defunct Rethink Robotics) to demonstrate how their software system works, but the system itself could be used on other robots. Through both simulated and real-world experiments, they demonstrated that robots could use the system to complete tasks while respecting ownership rules.
