Credits : Searchsecurity.techtarget

Software developers have a lot to contend with when it comes to keeping their skill levels current. New features are constantly introduced to integrated development platforms, and programming languages evolve frequently, as do development methodologies, like Waterfall. On top of these moving parts, developers have to deliver projects on time and on budget, follow secure coding best practices and remain compliant with industry regulations.

Many developers have adopted DevOps, the principle of integrating development and IT operations under a single automated umbrella, which streamlines frequent feature releases and increases application stability. However, it can be difficult for security and compliance monitoring tools to keep up with this pace of change, as they weren’t built to test code at the speed DevOps requires. Security is largely regarded as the main obstacle to rapid application development, and as a result, application security in DevOps has suffered.

In the long term, applications with security built in from the beginning are far more likely to resist attack and avoid potentially devastating interruptions to daily operations. This is why the DevSecOps application security movement is so important — and why developers need to understand its relevance to their work and how it can improve their output.

Ryan O’Leary, former chief security research officer at WhiteHat Security, said: “Our average customer takes 174 days to fix a vulnerability found when using dynamic analysis in production. However, our customers that have implemented DevSecOps do it in just 92 days.” Likewise, he added: “If we look at vulnerabilities found in development using static analysis, an average company takes 113 days, while the DevSecOps companies take just 51 days.”

The concept of shifting security left assumes everyone is responsible for security, in contrast to incident response, where the security team is called in at the end. Adding more automation from the start reduces the chance of misadministration and mistakes. Automated security functions, such as identity and access management, firewalling and vulnerability scanning, can be enabled throughout the DevOps lifecycle.

Security and risk management leaders need to embrace the collaborative nature of DevOps to create a seamless and transparent development process. To do this, developers need to ensure security is included in all decisions and lifecycle processes.

This is very much a two-way street. Software developers need to fully understand current and proposed security processes and lifecycles. For example, where are the shortcomings when it comes to building security in?

Likewise, consider what is already in place to ensure application security in DevOps, as well as the tools and skills that will need to be used and learned. For example, do developers have tools to check for vulnerabilities during the local build process, or do they do this in the continuous integration/continuous delivery pipeline? Also, do company development processes mandate a security check verifying that the code is secure at each build?
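As a concrete illustration of such a gate, a build step can parse a dependency scanner’s report and fail when any finding meets a severity threshold. The sketch below is a minimal, hypothetical example: the report shape (a JSON list of findings with `id` and `severity` fields) is an assumption for illustration, not the output format of any particular scanner.

```python
import json

# Severity ranking used to compare findings against the build threshold.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def should_fail_build(report_json: str, threshold: str = "high") -> bool:
    """Return True if any finding meets or exceeds the severity threshold."""
    findings = json.loads(report_json)
    limit = SEVERITY_RANK[threshold]
    return any(
        SEVERITY_RANK.get(f.get("severity", "low"), 0) >= limit
        for f in findings
    )

# A report as a hypothetical scanner might emit it.
sample = json.dumps([
    {"id": "CVE-2023-0001", "severity": "medium"},
    {"id": "CVE-2023-0002", "severity": "critical"},
])
print(should_fail_build(sample))  # True: the critical finding trips the gate
```

Wired into CI, a nonzero exit from such a check blocks the merge; run locally, it gives developers the same verdict before they push.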

Most organizations have clear governance of risk, and derived security policies are a product of that. Newer ways of thinking, such as making a transition to DevSecOps, require implementing processes to bake those security policies into DevOps processes. In order to produce positive results when baking application security in DevOps, organizations should combine development, security and operations teams, shorten feedback loops, reduce incidents, and define and emphasize shared security responsibilities.

The time it takes to fix a vulnerability is a good measure of whether your DevSecOps application security program is effective enough. This time frame reflects security team agility, as well as how they handle and prioritize issues that come up.

This article is shared by www.itechscripts.com | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

Credits : Firstpost

The Indian Ministry of Electronics and Information Technology had earlier revealed India’s first home-grown processor, called Shakti. The chipset has been in the works since 2016, and the Indian Institute of Technology Madras has now released an SDK (software development kit) for the processor.

IIT Madras’ RISE group has been responsible for the development of Shakti, and it plans to release six classes of the processor in the market: E-Class, C-Class, I-Class, M-Class, S-Class and H-Class. They can be used in a wide variety of devices, including IoT devices, robotic platforms, motor controls and more.

The C-Class is a 64-bit, five-stage, in-order microcontroller-class processor with clock speeds of 0.2-1 GHz. The I-Class is a 64-bit processor with multithreading support and clock speeds ranging from 1.5 to 2.5 GHz. The M-Class processor can support up to eight cores and has the same clock speeds.

The S-Class variants of Shakti are aimed at server-type workloads; the S-Class is an enhanced version of the I-Class processor with the same multithreading support. The H-Class processor targets high-performance computing and analytics workloads. Apart from these, RISE is working on T-Class and F-Class processors as well.


Credits : Cloudcomputing-news

The software-as-a-service model has been widely embraced as digital transformation becomes the norm. But with it comes the risk of network outages. IDC has estimated that for the Fortune 1000, the average total cost of unplanned application downtime can range from $1.25 billion to $2.25 billion per year. This risk arises primarily from the rapid iteration of the DevOps methodology and the resulting testing shortfalls.

To protect against certain errors and bugs in software, a new and streamlined approach to software testing is in order.

The DevOps/downtime connection

Development and testing cycles are much different than they used to be, owing to the adoption of the DevOps methodology. To remain competitive, software developers must continually release new application features, sometimes pushing out code updates as fast as they write them. This is a significant change from how software and dev teams traditionally operated. Teams could once test for months, but these sped-up development cycles require testing in days or even hours. The shortened timeframe means that bugs and problems are sometimes pushed through without the testing required, potentially leading to network downtime.

Adding to these challenges, a variety of third-party components must be maintained in a way that balances two opposing forces: changes to a software component may introduce unexplained changes in the behavior of a network service, but failing to update components regularly can expose the software to flaws that could impact security or availability.

Testing shortcomings

It’s pricey to deal with rollbacks and downtime caused by bugs. It typically costs four to five times as much to fix a software bug after release as it does during the design process. The average cost of network downtime is around $5,600 per minute, according to Gartner analysts.

Financial losses are a problem, but there’s more to be lost here. There’s also the loss of productivity that occurs when your employees are unable to do their work because of an outage. There are the recovery costs of determining what caused the outage and then fixing it. And on top of all of that, there’s also the risk of brand damage wreaked by angry customers who expect your service to be up and working for them at all times. And why shouldn’t they be angry? You promised them a certain level of service, and this downtime has broken their trust.

And there’s another wrinkle. Software bugs cause issues when they are released, but they can also lead to security issues further down the road. These flaws can be exploited later, particularly if they weren’t detected early on. The massive Equifax breach, in which the credentials of more than 140 million Americans were compromised, and the Heartbleed bug are just two examples. In the case of the Heartbleed bug, a vulnerability in the OpenSSL library caused significant potential for exploitation by bad actors.

In this environment of continuous integration and delivery, developers make changes to the code that trigger a pipeline of automated tests. The code then gets approved and pushed into production. A staged rollout begins, which allows new changes to be pushed out quickly but relies heavily on the automated test infrastructure.
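A staged rollout of this kind typically starts by exposing the new version to a small, stable slice of users. The sketch below shows one common way to do deterministic canary bucketing; the hashing scheme and percentages are illustrative choices, not any vendor’s implementation.

```python
import hashlib

def route_to_canary(user_id: str, canary_percent: int) -> bool:
    """Deterministically send a fixed share of users to the new version."""
    # Hash the user ID into one of 100 stable buckets, so the same user
    # always sees the same version while the rollout percentage holds.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < canary_percent

# Raising the percentage widens the slice without reshuffling earlier users.
users = ["alice", "bob", "carol", "dave", "erin"]
canary_20 = {u for u in users if route_to_canary(u, 20)}
canary_50 = {u for u in users if route_to_canary(u, 50)}
print(canary_20 <= canary_50)  # True: the 20% slice is a subset of the 50% slice
```

The subset property is what makes the rollout "staged": each widening step only adds users, so a regression surfaced at 20% never has to be re-verified when the dial moves to 50%.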

This is hazardous, since automated tests are looking for specific issues, but they can’t know everything that could possibly go wrong. So then, things go wrong in production. The recent Microsoft Azure outage and Cloudflare’s Cloudbleed vulnerability are examples of how this process can go astray and lead to availability and security consequences.

A new way to test

A solution to the shortcomings of current testing methods would find potential bugs and security concerns prior to release, with speed and precision and without the need to roll back or stage. By simultaneously running live user traffic against the current software version and the proposed upgrade, users would see only the results generated by the current production software unaffected by any flaws in the proposed upgrade. Meanwhile, administrators would be able to see how the old and new configurations respond to actual usage.
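In outline, such a mirroring layer duplicates each request to both versions, returns only the current version’s response to the user, and records every divergence for administrators. The sketch below is a simplified illustration of the idea, with plain Python callables standing in for the two deployed versions:

```python
from typing import Callable, Dict, List

Handler = Callable[[dict], dict]

def make_mirror(current: Handler, candidate: Handler,
                divergences: List[dict]) -> Handler:
    """Serve responses from `current`; shadow-run `candidate` and log diffs."""
    def handle(request: dict) -> dict:
        live = current(request)
        try:
            shadow = candidate(request)
            if shadow != live:
                divergences.append(
                    {"request": request, "live": live, "shadow": shadow})
        except Exception as exc:  # a crash in the candidate never reaches users
            divergences.append({"request": request, "error": repr(exc)})
        return live  # users only ever see the current version's output
    return handle

# Toy versions: the candidate mishandles zero.
v1 = lambda req: {"result": req["n"] * 2}
v2 = lambda req: {"result": req["n"] * 2 if req["n"] else -1}

log: List[dict] = []
mirror = make_mirror(v1, v2, log)
print(mirror({"n": 3}))   # {'result': 6} -- versions agree
print(mirror({"n": 0}))   # {'result': 0} -- user sees v1; divergence logged
print(len(log))           # 1
```

Because the candidate runs only in the shadow path, even an unhandled exception in the new version is captured in the divergence log rather than surfaced to users.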

This would allow teams to keep costs down, while also ensuring both quality and security, and the ability to meet delivery deadlines – which ultimately helps boost return-on-investment. For the development community, building and migrating application stacks to container and virtual environments would become more transparent during development and more secure and available in production when testing and phasing in new software.

Working with production traffic to test software updates lets teams verify upgrades and patches in a real-world scenario. They are able to quickly report on differences in software versions, including content, metadata and application behavior and performance. It becomes possible to investigate and debug issues faster using packet capture and logging. Upgrades of commercial software are easier because risk is reduced.

Toward quality releases

Application downtime is expensive, and it’s all the more painful when it’s discovered that the source is an unforeseen bug or security vulnerability. Testing software updates in production overcomes this issue by finding issues as versions are compared side by side. This method will save development teams time, headaches and rework while enabling the release of a quality product.


Credits : Techcrunch

Sourced, or source{d}, as the company styles its name, provides developers and IT departments with deeper analytics into their software development life cycle. It analyzes codebases, offers data about which APIs are being used and provides general information about developer productivity and other metrics. Today, Sourced is officially launching its Enterprise Edition, which gives IT departments and executives a number of advanced tools for managing their software portfolios and the processes they use to create them.

“Sourced enables large engineering organizations to better monitor, measure and manage their IT initiatives by providing a platform that empowers IT leaders with actionable data,” said the company’s CEO Eiso Kant. “The release of Sourced Enterprise is a major milestone towards proper engineering observability of the entire software development life cycle in enterprises.”

Because they are hallmarks of every good enterprise tool, it’s no surprise that Sourced Enterprise also offers features like role-based access control and other security features, as well as dedicated support and SLAs. IT departments can also run the service on premises or use it as a SaaS product.

The company also tells me that the enterprise version can handle larger codebases, so that even complex queries over a large data set take only a few seconds (or minutes if it’s a really large codebase). For these advanced queries, the enterprise edition includes a number of add-ons. “These are available upon request and tailored to help enterprises overcome specific challenges that often rely on machine learning capabilities, such as identity matching or code duplication analysis,” the company says.


Credits : Darkreading

Developers in enterprise environments — and at commercial software companies, for that matter — have learned that to deliver features swiftly, it’s much more expedient not to reinvent the wheel with certain chunks of code. And so they increasingly build their software by mixing and matching open source components within their codebases, reserving their development time for the components that truly add value and differentiation to their applications.

This reliance on open source components greatly speeds up innovation but often comes at a high price: Many of these components available for download contain dangerous vulnerabilities. Some companies are better than others in establishing policies about how and when developers can use them, as well as at actively managing the components to track for flaws. The latest research shows that those that do it well can minimize the risks introduced by these components into their software while maximizing the gains.

“For organizations who tame their software supply chains through better supplier choices, component selection, and use of automation, the rewards are impressive,” says Wayne Jackson, CEO of Sonatype, which last week released its “2019 State of the Software Supply Chain Report.” This study, along with two others released in the past two months, paint a good picture of open source component risks and how organizations are mitigating them.


Credits : Techrepublic


In-person interviews can be an intimidating experience, but are necessary for the hiring process. One of the biggest mistakes a candidate can make is coming into an interview unprepared. Candidates should do their research on the organization and put themselves in the best position to land the job. 


However, this task is often easier said than done, especially for roles that are more technically-based like software engineers. These professionals may find they are able to show off their technical skills, but not necessarily their soft skills during the interview process. 


Soft skills are becoming more important to hiring managers and staff at tech companies, according to a recent Udemy report. Not only should candidates assess their own soft skills, but they should also consider the personality and culture of the company they are interviewing with, said Rishon Blumberg, founder of 10x Ascend. 


Keeping the focus on software engineers, “[they] should take stock of what are the most important elements for them in a new job,” said Blumberg. “When you have a clear understanding of what you value most (culture, mission, work flexibility, compensation, equity, etc) a natural roadmap will start to unfold as you meet with different companies.”

Preparing for the interview questions can be difficult, but candidates often stand out—or redeem themselves—based on the questions they ask. The awkward “do you have any questions for us?” at the end of the interview is a great opportunity for candidates to rise to the top. 

“Coming in with prepared questions will lessen your stress when you get to the end of the interview and ensure that you leave the interviewer with a positive impression of your interest in the company and role,” wrote Brian Wong, Pathrise software engineering advisor, in his post on Pathrise Resources. “Questions that show your willingness to learn, drive to work hard, and excitement about the company and mission are all really helpful for the interviewers to gauge your fit as part of the culture.”

Here are some of the best questions software engineers can ask during an interview: 

1. How would you describe your organization’s company culture?

“You’ll spend approximately 40% of your life at work, so working in an environment with a culture that matches your own beliefs and desired lifestyle goals is vitally important,” said Blumberg. Candidates need to make sure they’ll enjoy working with the people they are around. 

2. What is your onboarding process like for developers, and how quickly will I be contributing to the code base? 

Software engineers should inquire about the onboarding process and the time it would take to really start the job, said Paul Wallenberg, senior unit manager of technology services at LaSalle Network. This question shows purpose and drive in the candidate. 

3. What are your current challenges on the team? 

Candidates should ask about the challenges current team members face, said Wong. Through this question, the candidate can learn how the organization handles hardships and conflict. 

4. What kind of projects will I be given?

“Many job seekers in the tech sector are mission driven and looking to work on challenging projects,” said Blumberg. “They want to solve unique problems and create products that will help humanity in some way. If this is important to you, ask about what types of things you’ll be building and working on. Will these projects be challenging enough to hold your interest for the duration of your tenure with the company?”

5. What type of ongoing educational opportunities does your organization support?

Software engineer candidates should ask the hiring manager if the company offers education courses, or support for education courses, noted Blumberg. This shows the candidate’s willingness to learn, and is also valuable for candidates who are interested in continuing school. 


Credits : Automationworld

For more than a decade now, Siemens has been pursuing a growth strategy that combines acquisitions with organic R&D. With technologies from each in place, Siemens is now combining some of its software products into a single portfolio to provide users with easier access to many of its different software capabilities.

An example of this can be seen in Siemens’ launch of Opcenter, which integrates the company’s Manufacturing Operations Management (MOM) capabilities such as scheduling, manufacturing execution, manufacturing intelligence, and quality management into a single cloud-ready portfolio. These capabilities come from a combination of Siemens in-house products, as well as those acquired from companies like Camstar and Preactor.

According to Siemens, integrating these capabilities and making them available via the cloud makes Opcenter easier to deploy and minimizes the amount of user training typically required. It should also ease the typically lengthy software implementation process while giving companies greater access to their production lines.

Opcenter can be launched on-site, remotely, or both simultaneously. All software systems and applications can operate on a variety of smart devices.

“Siemens Opcenter is the next logical step, given our extensive technological innovation and MOM portfolio evolution,” said Rene Wolf, senior vice president, Manufacturing Operations Management Software at Siemens Digital Industries.

Along with the launch of Opcenter, Siemens announced a new version of its Manufacturing Execution System (MES) software. The newest features in this version are focused on smart device integration, mobility, and capabilities that enhance availability and data flow.



Credits : Auto.economictimes

Volkswagen has opened its new information technology development centre in Dresden, which will work to create an industrial cloud connecting all of its production facilities.

Around 80 IT specialists will work in the facility to develop the Volkswagen industrial cloud that is being built in association with Amazon Web Services.

The industrial cloud will bring together data from all of Volkswagen’s factories and enable smart real-time control across different factories simultaneously. The goal of the industrial cloud system is to amalgamate all data and digitalize production and logistics.

The automaker is also putting 5,000 digital experts into a new unit called “Car.Software” by 2025, as the German car manufacturer aims to develop at least 60% of its software in-house by then, up from less than 10% currently.

Volkswagen also said all of its new vehicles would use the same software platform – consisting of its vehicle operating system known as “vw.os” and the Volkswagen Automotive Cloud – by 2025.


Credits : Forbes

The global tech talent pool has never looked more attractive, especially as the demand for software developers continues its dramatic rise. In fact, according to the U.S. Bureau of Labor Statistics, “Employment of software developers is projected to grow 24% from 2016 to 2026, much faster than the average for all occupations.”

If you’re not technical yourself, it can be hard to tell when you’ve found a reliable software developer or a team worth contracting. So how do you approach hiring software engineers when you’re a nontechnical business owner?

As the CEO of a software design and development agency, I always give my potential clients and friends who are searching for outsourced development work the same essential tips I would give to a family member.

Step 1: Reach Out To A Tech-Savvy Friend For Support

Before you think about starting your search, reach out to your most technical friend, even if they’re happily employed. Maybe it’s the guy who was the go-to computer science whiz in high school or the woman who wrote the software at your previous job. Bottom line: Make it someone that you trust and who you feel comfortable enough saying something technically “dumb” to without worrying about their reaction.

Start by explaining to them what your software idea is and how you imagine it looking and working for a user. Then, ask them if they’ll help you write a brief technical description of your project and your needs — nothing fancy, and maybe a couple of paragraphs. They’ll likely know how to put together a quick technical summary so that another engineer will understand it and, this can make your first communication with an unfamiliar technical person that much easier.

Step 2: Locate Candidates

After you’ve nailed down a formalized version of your idea, ask your friend to help you locate five to ten freelancers or software development agencies whose skills are a good fit for your project. Platforms like Clutch, Upwork and GoodFirms are great places to find engineers and agencies. I’ve personally found it most effective to connect with at least five different freelancers or agencies that I know I’ll be able to work with.

Step 3: Set Up Interviews

With your list of potential candidates, you can start setting up the first round of interviews. If you like a specific freelancer or firm, then you may want to ask your technical friend to perform a second, more technical interview.

Pro tip: Offer your friend a $250 Amazon gift card or cash as a reward for helping vet your candidates, and make them take it. Their enormously helpful assistance will be of more value than you can imagine. Plus, that seemingly small investment upfront can often save you a ton of money, time, disappointment and stress down the line.

Step 4: Vet, Vet, Vet

This vetting process is where you can really rely on your trusted technical friend to ask the technical questions you may not understand the answers to. They can help you conduct an in-depth evaluation of a candidate’s technical expertise and experience working with similar clients. This may serve you much better than a simple cost analysis. 

When vetting, ask candidates important questions such as:

• Do you have similar projects to mine in your portfolio?

• How do you charge for your time and expertise?

• What happens if the scope of my project changes?

Also, have them describe how they work with nontechnical people to make sure you know what you’re getting.

Step 5: Talk To Their References

Ask your top three candidates to put you in touch with three business owners they worked with who had projects similar to yours or around the same size. Ask them how technical that business owner was and how they might compare to you and your project. If there are similarities, then when you speak to that business owner, you’ll likely be able to gauge how easy the freelancer or agency was to work with for someone with your level of technical skill.

Any hesitation or excuses on the part of the freelancer or agency to share that information may serve only as a minor red flag because let’s face it: Not all clients are the easiest to work with. A good freelancer or agency, however, can likely provide you with four to five references at the drop of a hat.

Don’t Make This Common Mistake

It’s common for a nontechnical person who’s so engaged and excited about their project to want to hit the ground running and focus their engineer search on cost. One of the biggest mistakes you can make, however, is comparing freelancers or agencies based solely on their fixed project price quote or what they charge per hour.

If you lack a deep technical background and haven’t sought help from a tech-savvy friend, you may not have provided enough details for a freelancer or an agency to adequately estimate your project’s cost. Your candidates may be missing a key understanding of what you’re looking to build. Comparing candidates based on estimates from incomplete project scopes and ultimately picking someone because they’re the cheapest often results in unique challenges. Inevitably, that path can end up costing you more in time, money and headaches.

When considering a freelancer or agency, it’s always important to check out any reviews you can find online and/or talk to their references. You also want to understand their project management style and how responsive and communicative they are at the start. Ask to interview the lead project manager and lead engineer who will be working on your project.

When in doubt, take a cue from the Beatles, and get by with a little help from your friends. Don’t have a super technical buddy? Well, considering there are over 4.5 million IT workers in the U.S., ask your social network. Ask the people you trust most to connect you with someone who can help you. A decision like this that could make or break your project — and ultimately, your business — is worth the wait and the small fee you pay a friend.



Credits : Forbes

Just what we need–yet another “framework” for improving software security.

We’ve already got the PCI DSS (Payment Card Industry Data Security Standard), the BSIMM (Building Security In Maturity Model), the OWASP (Open Web Application Security Project), the SAMM (Software Assurance Maturity Model), the ISO (International Organization for Standardization) and SAFECode (Software Assurance Forum for Excellence in Code) — the list goes on.

And it’s about to go on some more. The framework in the works—a white paper draft at the moment—from the National Institute of Standards and Technology (NIST), is called SSDF, as in, “Mitigating the Risk of Software Vulnerabilities by Adopting a Secure Software Development Framework (SSDF).” It went public June 11 and the comment window is open through August 5.

The framework recommends 19 practices, organized into four groups:

  • Prepare the organization.
  • Protect the software.
  • Produce well-secured software.
  • Respond to vulnerability reports.

Following the practices, the paper says, “should help software producers reduce the number of vulnerabilities in released software, mitigate the potential impact of the exploitation of undetected or unaddressed vulnerabilities, and address the root causes of vulnerabilities to prevent future recurrences. Software consumers can reuse and adapt the practices in their software acquisition processes.”

Recommendations, not mandates

All good. A more-than-worthy goal. Who wouldn’t want to mitigate the risks of software vulnerabilities? It’s just that it sounds a bit like issuing a framework for controlling the speed of vehicles on public ways when there have been dozens of laws on the books for decades designed to do the same thing.

Beyond that, whatever the specifics of the final version of the framework, they will be recommendations, not mandates. NIST is a federal agency, under the Department of Commerce, but is not a regulatory agency and therefore has no leverage to force compliance with the framework.

And yet … perhaps it will fill a void. The goal of the proposed framework seems to be less about trying to reinvent the wheel and more about bringing various types of high-quality wheels together in one place so people who need wheels can decide what fits their needs.

Indeed, the practices refer heavily to the multiple frameworks listed above, indicating that this is a consolidation of existing best-practice recommendations.

As one of the co-authors, Murugiah Souppaya of the computer security division of the Information Technology Laboratory (within NIST), put it, “The paper facilitates communications about secure software development practices among groups across different business sectors around the world by providing a common language that points back to existing industry sector-specific practices.”

He added that this “common language” is meant to help them describe their current practices. “This will allow them to set their desired baseline and identify areas for improvement,” he said.

Of course, none of the existing frameworks has transformed software security so far. There are headlines daily about breaches enabled by vulnerabilities—sometimes rampant—in applications or devices controlled by software.

So even if this is the best one yet, if organizations aren’t persuaded to invest the time and money to follow the recommendations, it is unlikely to generate even incremental, never mind transformational, improvements in software security.

Taking the long view

What are its chances of breaking that precedent? Not high—at least not in the short term—in the view of Sammy Migues.

Migues, principal scientist at Synopsys and a co-author of the BSIMM, said this doesn’t mean the proposed framework has no potential value. “Yes, following it would help,” he said. “But who is going to follow it? Only those for whom it is mandated, and only if they’re audited.”

And the number of those is likely to be small indeed. Migues noted that NIST “doesn’t make law or set policy. It’s an innovation organization for cheerleading and awareness. So unless some other organization that has the imprimatur of authority holds people to it, it’s unlikely that it will be followed,” he said.

The marketplace—both public and private—could exert some leverage, he said, if entities putting jobs out to bid made a security framework like this one part of the RFP (request for proposals). “They could say, ‘This is one of the things you have to comply with to get the contract,’” he said.

But given the number of frameworks/standards already in existence, it is difficult to see how “one more arrow in the quiver” would be the one that suddenly disrupts the marketplace like that.

Part of the problem, he said, is that it’s hard to change people who are set in their ways, even in an industry that is evolving as rapidly as IT.

“There are a couple of things I really like [in the proposal]—one of them is to implement a supporting toolchain. They’ve actually put thought into it—they’re saying you can’t just download free stuff. And putting these in as actual tasks, along with who should think about them, is useful,” he said.

“But every development manager has his or her way of doing things. Do you think any of them are going to look at this and say, ‘Wow. I’ve been doing this for 20 years and have been doing it wrong all that time! I need to start doing it completely differently’? Not a chance.”

Incremental change

Still, if a framework like this makes its way into the education system, it could yield incremental changes that would become transformative over time, he said.

“It can’t come from a vendor; it has to come from neutral third parties,” he said. “If it makes its way into textbooks, college courses and RFPs, then it might gain some traction with regulatory bodies.”

NIST is hopeful that the framework’s “high-level” approach will make it more palatable to an “audience” of organizations with vast diversity in size and industry verticals. “The most important thing is implementing the practices and not the mechanisms used to do so. For example, one organization might automate a particular step, while another might use manual processes,” the paper says.

To that, Souppaya adds that the intent is for the SSDF to be “customized by different sectors and individual organizations to best suit their risks, situations, and needs as organizations will have different software development methodologies, different programming languages, different toolchains, etc.”

Migues agrees that flexibility is important—that getting the job done is more important than how the job gets done. But he said many organizations might not know the options for the “how.”

“What’s missing is workshops—something like a session at RSA with CISOs—that would guide people through how to do it based on their needs,” he said. “Just because I buy a Julia Child cookbook doesn’t mean I can do the recipes if I don’t know how to cook.”

Something along that line may be in the works. The authors of the white paper call it “a starting point” that they intend to expand to cover topics such as “how an SSDF may apply to and vary for different software development methodologies.”

That future work, they said, “will primarily take the form of use cases so the insights will be more readily applicable to certain types of development environments.”
