Credits: Groundreport

To program, you need to know a programming language, and since the world does not rely on just one, you have a huge number to choose from. The biggest challenge for a new programmer is deciding which language to pick. You don’t really know which one to choose because you don’t yet have an answer to the “why.” When you ask the experts, they will often steer you toward the language they themselves know best; since they have spent years playing around with it, it always looks easy to them.

The first thing you need to know as an aspiring programmer is that you can’t pick a language based only on how easy it is. Of course, you will be advised to pick an easy-to-learn language first, but that does not mean you will base your career on it. Easy languages are often recommended simply to encourage you to become a programmer: when the starting path is easy, newcomers are more motivated to keep going and learn more. No matter how easy your first programming language is, you will have to learn more advanced and more difficult languages later on.

Now, there are many ways to pick your first programming language. One of the biggest reasons for picking a language is salary: you want to be a programmer because you want to make good money, and certain languages will pay you more than others. Some of the most recommended languages when it comes to high salary are Python, Java, and C#. If you are good at programming in these languages, you can realistically reach a six-figure yearly income; with Java alone you can make close to $80,000 every year.

Another way of picking a language is based on its application. You have to realize that different programming languages have different strong points. Java is well suited to building mobile applications, but when it comes to web applications and web development, you will usually be advised to learn PHP and JavaScript. Note here that JavaScript is not Java; the two are not related in any way. C++ and Python are pretty strong choices when you are interested in creating games. If your interest is in big data and its analysis, MATLAB is the language for you.

You would also want to pick a language based on your geography. Yes, where you live can have a great impact on your earnings as a programmer. For example, if you are an expert in Python and Ruby, the best place to find work is California, the state with the highest demand for Ruby and Python programmers. Demand will not be as high if you are in New York or Virginia. The language that stays equally popular in all the states mentioned above is Java. In short, you can’t go wrong with Java.

It would benefit you to know that certain languages are becoming more popular with time, and the future might belong to them. Python is another language alongside Java that you might want to learn if you want to be a future-proof programmer. Keep in mind that giants like Facebook and Google use Python for much of their work. Among C++, PHP, and Java, Java remains the only language that has maintained strong popularity for decades. Lastly, you can learn multiple languages if you have a passion for programming and don’t mind spending time learning new ones.

JFrog Artifactory Extends The Universe with PHP Support for Developers

Credits: Prnewswire

 

SUNNYVALE, Calif., Oct. 20, 2016 /PRNewswire/ — JFrog announced its support for PHP package management in today’s release. JFrog Artifactory, the Universal Repository Manager, is an all-inclusive package solution and supports all binary artifacts, including Maven, Docker, npm, PyPI, NuGet, YUM, Debian and many other packaging formats.

With this new support, PHP developers can easily store their local PHP Composer packages in Artifactory, and also proxy other PHP repositories like Packagist. PHP developers can also utilize the full strength of what JFrog Artifactory has to offer, including the ‘universal’ ability to use PHP along with other package types in one system of record. JFrog Artifactory also provides support for rich metadata around an artifact, and PHP developers can leverage the Artifactory Query Language (AQL) to query this data and uncover information around each of these artifacts.

“JFrog believes in providing freedom of choice to our developers,” said Dror Bereznitsky, VP of Product at JFrog. “Our universal support in JFrog products dramatically speeds up the development process by enabling users to manage all binary artifacts equally well, regardless of the programming language, technology or CI server used to build them. Combined with our open APIs, our products integrate seamlessly into the developer’s continuous integration environments.”

JFrog will continue to expand its universal support with more releases planned for this year.

More information about JFrog Artifactory PHP support can be found on the JFrog Wiki.

About JFrog
More than 3000 paying customers, 60,000 installations and millions of developers globally rely on JFrog’s world-class infrastructure for software management and distribution. Customers include some of the world’s top brands, such as Amazon, Google, LinkedIn, MasterCard, Netflix, Tesla, Barclays, Cisco, Oracle, Adobe and VMware. JFrog Artifactory, the Universal Artifact Repository, JFrog Bintray, the Universal Distribution Platform, JFrog Mission Control, for Universal Repository Management, and JFrog Xray, Universal Component Analyser, are used by millions of developers and DevOps engineers around the world and available as open-source, on-premise and SaaS cloud solutions.  The company is privately held and operated from California, France and Israel. More information can be found at www.jfrog.com.

PHP is one of the most common languages on the web, so as a developer, it helps to have it in your tool kit. You don’t have to know it perfectly to dive into the language—PHP is similar to C and Java in some ways, so if you know these two languages, you can jump into it more easily. However, when learning any new language, chances are you’ll make some mistakes as you’re getting up to speed. Here’s a list of the most common mistakes PHP developers may face and ways to help avoid them.

1. Not Securing SQL Code

Some of the most common cyberattacks on the web are SQL injections. In a SQL injection attack, an attacker inserts unauthorized SQL into a query your application sends to the database, causing it to execute commands that leak, alter, or delete data. However, better PHP programming can minimize the risk of SQL injection attacks.

PHP is the backbone of several out-of-the-box solutions such as WordPress. When writing new extensions and plugins for WordPress sites, developers will likely create inline SQL statements. These statements are built from front-end input and sent to the SQL database. If user input is concatenated directly into these statements, you run the risk of leaving your site open to SQL injection.

There are two ways to avoid this. The first (and preferred) way is to use prepared statements. The second is to use parameterized queries.

The following statement builds a query directly from user input submitted via a form:

$stmt = "SELECT * FROM users WHERE firstname = '" . $firstname . "';";

This leaves your site open to SQL injection. A safer bet is to use parameterized, prepared statements like the following:

$stmt = $dbConnection->prepare('SELECT * FROM users WHERE firstname = ?');

$stmt->bind_param('s', $firstname);
$stmt->execute();

These are better methods because the quote mark that opens and closes a string value in SQL is processed as a literal rather than as an opening or terminating character.
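For reference, here is a minimal sketch of the same pattern using PDO, PHP’s database abstraction layer. An in-memory SQLite database and a hypothetical users table stand in for a real server so the snippet is self-contained; in production you would point the DSN at your MySQL credentials instead:

```php
<?php
// Minimal, self-contained sketch of PDO prepared statements.
// SQLite in memory stands in for the real database server.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE users (firstname TEXT)');
$db->exec("INSERT INTO users (firstname) VALUES ('Alice')");

// The placeholder keeps user input out of the SQL text entirely,
// so a value like the one below is treated as data, not as SQL.
$firstname = "Alice'; DROP TABLE users; --";
$stmt = $db->prepare('SELECT * FROM users WHERE firstname = ?');
$stmt->execute([$firstname]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
// $rows is empty: the malicious string matched nothing and ran nothing.
```

Because the user’s value never becomes part of the SQL text, even a deliberately malicious string is matched as plain data and the users table survives untouched.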

2. Suppressing Errors

PHP has different error levels, and you can manually suppress them in your code. This is useful for errors that aren’t critical and don’t cause any serious effects. For instance, you could suppress deprecation warnings raised when running on a newer PHP version.

The “@” symbol suppresses errors when you don’t need them, but use it with caution, as it can sometimes cause unforeseen issues. Suppose you have an include file that isn’t necessary for the application to run; it might be optional for users who only use a specific component. In that case, you could use the following code in your PHP file:

@include 'animation.php';

In the above code, even if the animation.php file has errors, they will not be displayed or logged. Error suppression should be used sparingly, because errors that aren’t logged won’t be found until something critical occurs in the application. In the long run, it’s better to handle errors than to suppress them for convenience.
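One hedged alternative to suppression is to handle the optional file explicitly. In the sketch below, includeOptional() is a hypothetical helper (and animation.php is the optional file from the example above); it logs a missing file instead of silencing every error the include might raise:

```php
<?php
// A sketch of handling an optional include explicitly rather than
// silencing it with "@": missing files are logged, real errors from
// the included code are still reported normally.
function includeOptional(string $file): bool
{
    if (!is_file($file)) {
        // Record the absence instead of swallowing it silently.
        error_log("Optional include missing: {$file}");
        return false;
    }
    include $file;
    return true;
}

// Hypothetical usage with the optional file from the example above:
includeOptional('animation.php');
```

The application still runs without the file, but the log now tells you it was skipped, so nothing fails silently.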

3. Printing Data Directly from User Input

This mistake is somewhat directly related to the first mistake we listed. The first mistake—not securing SQL code—can lead to SQL injection security flaws. This mistake references cross-site scripting (XSS) security flaws that can occur when the developer prints data directly from a user.

Suppose you have a form input text box named “firstname.” You want your script to display “Welcome, $firstname” to the viewer. You can do this using the following code:

Welcome <?php echo $_POST["firstname"]; ?>

However, what happens if a user inputs “<script>alert('hello');</script>”? This might seem like a minor annoyance that no one would bother with, but the problem is that you’re allowing JavaScript to run unchecked in the browser. When JavaScript from user input can run in the browser, an attacker can use XSS to steal passwords and sessions. A creative attacker can mount any number of attacks, including session hijacking, phishing, and sneaky redirects.

Instead of printing user input directly, make sure you escape or strip any HTML tags from the output, especially script tags. This prevents rogue JavaScript code from running in your users’ browsers and keeps an XSS flaw from putting the entire application at risk.
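A minimal sketch of output escaping using PHP’s built-in htmlspecialchars(); the safeOutput() wrapper name is just for illustration:

```php
<?php
// Escape user input before echoing it back. htmlspecialchars()
// converts characters like < > " ' into HTML entities, so the
// browser renders them as text rather than executing them as markup.
function safeOutput(string $input): string
{
    return htmlspecialchars($input, ENT_QUOTES, 'UTF-8');
}

// A script tag submitted through the form is rendered harmless:
echo 'Welcome ' . safeOutput($_POST['firstname'] ?? '');
```

With the escaped output, a submitted `<script>` tag appears on the page as literal text instead of running in the visitor’s browser.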

4. Forgetting to Remove Development Configurations

It’s important for any developer to have a development environment—a staging environment that mimics the production environment, which houses the live code. In some cases, a developer might be rushed and forget to remove development variables and configurations, then upload these by accident to the production environment. This can be a disaster for a live application.

Many new developers try to skip the staging environment and go straight from development to production in an effort to save time. This is a mistake because staging can help you identify problems that you didn’t catch in development (remember, staging mimics production). If you accidentally forget to remove configurations or don’t find bugs until staging, you can still catch them before they hit the production environment.

Always have a staging environment, and use it even if you’re just making minimal changes. It’s also a good idea to have QA testers test the code in staging before it’s moved to production.

5. Accidentally Using the Assignment Operator Rather Than the Comparison Operator in a Condition

It’s easy to use the wrong operator accidentally when writing conditional statements. After all, developers spend much of their time assigning values to variables, so typing = instead of == comes naturally. But if you use the assignment operator where you meant a comparison, you run the risk of introducing bugs.

Take this code for example:

if ($condition = 'value')
//do something

In the above code, the developer mistakenly assigns the string “value” to the $condition variable, so the condition is always true. It should read like this:

if ($condition == 'value')
//do something

To avoid this type of mistake, some developers prefer “Yoda syntax,” which switches the order of the condition and value. This is what the above code would look like in Yoda syntax:

if ('value' == $condition)
//do something

Now, if you accidentally use the assignment operator instead of a comparison, PHP will report a parse error (you can’t assign to a literal), and you can correct it.
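Related to this, PHP’s loose == comparison performs type juggling, so many developers go one step further and use the strict === operator. A small sketch of the difference:

```php
<?php
// Loose vs. strict comparison: "==" performs type juggling, which
// can make unrelated values compare equal; "===" checks type too.
$condition = '0';

$loose  = ($condition == false);   // true: the string '0' is "falsy"
$strict = ($condition === false);  // false: a string is not a bool
```

Using === (and !==) wherever exact matches are intended removes a whole class of surprises on top of the assignment-versus-comparison typo.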

6. Forgetting to Run Backups

It might seem like an obvious step, but many developers have poor backup practices. You don’t need to back up every hour, but you should run backups each day when you do significant work on a project. Remember that backups save you hours of recoding should your drive fail and your data be lost.

If you have a difficult time figuring out a problem in your code, back up the system so you don’t lose the solution—and hours of work—and have to recode it. A backup can also save you from missing a deadline if something happens to go awry.

You should also create backups for your clients in the rare case that a client has a critical failure and no backup. It’s a nice gesture, and you can help your client out of a potentially sticky situation.

Conclusion

Many developers encounter these common mistakes when learning PHP; it’s part of picking up any new language. Like with anything, practice makes perfect. Once you make a mistake, you can learn from it and take steps to avoid repeating it in your future applications. Some mistakes will be critical and others minor, but this list can help you avoid the more common ones.

Read more at http://www.business2community.com/brandviews/upwork/6-common-mistakes-php-developers-avoid-01705208

Credits: Gamasutra

Game Design Deep Dive is an ongoing Gamasutra series with the goal of shedding light on specific design features or mechanics within a video game, in order to show how seemingly simple, fundamental design decisions aren’t really that simple at all.

Check out earlier installments, including the action-based RPG battles in Undertale, using a real human skull for the audio of Inside, and the challenge of creating a VR FPS in Space Pirate Trainer.

Who: Brooke Condolora

I’m Brooke Condolora of husband-and-wife team Brain&Brain, and I acted as artist, writer, and designer on our latest project, Burly Men at Sea.

Before games, I was a freelance graphic designer and web developer. My husband David and I did occasional creative projects together, like short films, until we finally stumbled onto the idea of making a game. We started developing our first, a tiny point-and-click adventure called Doggins, in 2012.

While working on Doggins, we gradually began to realize just how young and still undiscovered this medium is, and that through it storytelling could be pushed in so many new ways. That excitement carried us into a second project, Burly Men at Sea, which we released just over a month ago.

What: A branching folktale

Burly Men at Sea is constructed as a story-building game. In it, branching scenes combine to form a variable tale with a single, overarching theme. With each session, the player returns from one journey to set sail again, uncovering new paths for a series of kindred but distinct adventures.

Of the many elements of folklore that helped to establish what we describe as a “folktale adventure,” three contributed most to the design of Burly Men at Sea: length, tone, and moral. Designing for folktale length gave us a branching narrative with a short play cycle. Designing for folktale tone put the player in the role of storyteller, one step removed from our protagonists. But here we ran into a complication: how could the game present a cohesive moral tale when the tale itself is of the player’s choosing?

Our solution was to structure the story so that both beginning and end were consistent, with the player’s influence shaping the events in the middle in a way that developed a single theme.

Why: Adventure for adventure’s sake 

Burly Men at Sea was always a game about adventure, but it took us about a year and a large-scale adventure of our own to learn what exactly we wanted to say about it—or, ultimately, what it wanted to say to us. One of my earliest notes on the game, dated a full year before we even began development, describes it as “a type of interactive adventure with a moral, drawing from Scandinavian folklore.” While this is a perfectly accurate description of the game in its final state, we continued to explore what form that would take for the next two years.

We began with a far more traditional branching narrative in mind. Dialogue choices would carry the player through a series of trials and back, culminating in a lesson about choices made along the way. But we soon abandoned this direction, drawn more to the idea of using playful interaction to advance the narrative, with a sort of Windosill-like aspect of discovery that would set events into motion. With this in mind, we began to view the player character as another of the folkloric creatures in the game: an unseen being with limited influence. As such, a player could direct the protagonists’ attention by interacting with the environment or changing their perspective. The characters might then react to this situational change in a way that moves the story forward.

Once we’d established this framework, we began to work out what sort of story we wanted to tell within it. We had chosen early on to set the story in early 20th-century Scandinavia, so I began to research the folklore of that region and consider what sort of moral tale we wanted to tell.

But I soon learned that each time I held too rigidly to a chosen theme, the resulting narrative felt forced. I tried to write a story to change minds, in which the player was always wrong. I tried to write a story that pretended to be what it wasn’t. And finally, it occurred to me that it’s actually not fun to be “taught a lesson.” What makes an ending satisfying is that it neatly puts into words something nameless you’d already begun to discover for yourself. So I began to write instead with the loose idea to explore the nature of adventure myself, rather than approach it as the knowledgeable party. Looking back, this does seem the obvious route for a story about the unknown, but letting go of intent is a scary thing to do.

Having set theme aside for the moment, I soon had the rough concept for a journey involving encounters with various creatures from Scandinavian folklore, all taking place at sea. My attention returned to gameplay. How would this journey unfold for the player? I began to develop the village where our characters would set out on their adventure, working from a Norwegian museum’s historical account of life in a local fishing village. As I worked and reworked the location and its inhabitants, it slowly dawned on me that this was not Burly Men at Sea. I hadn’t even begun to develop the setting that made up half of the game’s title—and worse: I dreaded it. Reflecting, I realized that in every seafaring game I’ve played, the least interesting aspect of sea travel is the sea travel. Discovery is interesting. Islands are interesting. Even battles and races are interesting. But endless water and sky quickly become tedious.

The best way to keep our journey engaging, I felt, would be to focus on the encounters as self-contained nodes, with only brief moments of actual travel between. Each node would involve a new creature encounter and, for visual interest, a change in environment (time of day, weather, surface or underwater, island or cave).

Another early choice we made while designing the narrative: there would be no wrong answers to choices made in the game. We felt that any story about adventure would be best supported by a world that encouraged exploration and a willingness to take chances, and to that end, no path in the game should leave the player feeling that they should’ve taken a different route.

While we did look to a few other games for reference, the strongest influences on the design of Burly Men at Sea actually came from more traditional media. One that ultimately became the best reference for a branching story with a single ending was O. Henry’s short story “Roads of Destiny.” This old favorite of mine tells the story of a man who comes to a literal fork in the road and chooses one of three ways: right, left, or to return the way he’d come. The story goes on to follow each of these branches to its conclusion, each of which results in different combinations of the same events with ultimately the same fate. It was this method of weaving the same elements into wildly different variations of a single story that guided the shape of our narrative.

After working my way through nearly forty branching variations, I settled on what became the final structure of our story. We’d begin with an intro location that would also serve as a return point (the fishing village), and the story would branch three times during the course of a single journey before meeting again at the end. Each playthrough would take place over the course of a day and would be designed to be completed in a single sitting.

It was then that I stumbled onto what became the most pivotal piece of the game’s design: the viewport. Inspired equally by the masking ability of fog and by vignette illustrations found in old books, this viewport could be dragged or stretched by the player to direct the characters’ attention and so affect their movement. I immediately began to brainstorm variations on shape and usage: a house shape for building interiors, a flickering ring of firelight, an arched room. These began to combine with a secondary idea of using vignettes to style our narrative scenes like the pages of a book, further pushing the game’s folktale feel. And from this new direction, a more complete narrative theme began to emerge. Our story and gameplay had begun to feel like a cohesive experience.

But the change wasn’t entirely seamless. We had, after all, just attached an entirely new mechanic nearly a year into development. I now had a structural tangle to work through, worried that the game’s use of interaction had become extraneous. Of even greater concern was the possibility that when the viewport mechanic’s novelty wore off, it would become tedious. Somehow, we had to push it further, continue to surprise the player. I sketched out a rough idea for how the viewport might evolve in later scenes, but this only further scrambled my grasp of the structure.

Then, I came across an interview with Nintendo’s Koichi Hayashida, discussing their use of the concept of kishōtenketsu, or four-act structure. In it, an idea is developed over the first two stages, then suddenly goes an unexpected direction before leading into the conclusion. That was it! I hadn’t broken the game: I’d simply lacked the framework needed to contain it.

Equipped with this new lens, I could see how each stage of the game might make the best use of both mechanics, developing gameplay alongside theme. We would use the third act to alter the player’s perception of the viewport and their relationship to the story. Finally, I was able to begin designing the third stage of encounters that would offer that twist, probably one of the most difficult and enjoyable parts of the process. Balance was the first challenge: did the branches in each stage match up in encounter type, level of interaction, use of mechanic, and length? How would each branch connect to the next stage? And ultimately, was it possible for all branches with their player-determined outcomes to join up again at our return point? Would we achieve that elusive through line?


Final narrative structure for Burly Men at Sea, as prototyped in Twine.

Result:

As I tackled these challenges over the following year and saw the narrative gradually take a final form, I took a fresh look at the design and story as a whole to see what themes had emerged. To my surprise and enormous relief, I found that what the game itself had to say about adventure was better than anything I had thought I wanted to say. It was about adventure for adventure’s sake, about following impulse down an unknown path. Incredibly, or maybe not, even our development process had reflected this discovered theme. We tried a lot of strange ideas that managed to bind together into an honest little tale that taught us how to both tell and live a better story.

 

Credits: Venturebeat

Successful software development today requires the right people as much as the best code. I like to say that great people, who form great teams, have a shot at creating great software. And great companies are increasingly those that make great software, no matter what sector they’re in.

How, then, do you form a high-functioning, highly effective modern software development team? What does this team look like? Who are the people you want to recruit? And how should they work together in the name of accelerated, scalable, and sustained innovation?

To help answer these questions, I offer five simple axioms from my experience about what makes for successful, modern software development:

1. Attitude is everything. Hiring software developers with the right disposition matters a lot. Software is a reflection of the people who develop it; personality, as much as technical acumen, is key. Hire jerks, and your software is more likely to be difficult to use, and you will lose customers. On the other hand, engineers who are nice people are motivated to create humane systems they themselves would use. They will build great software for your users because they have empathy for them as human beings. You also have a considerable advantage if your engineering team understands and internalizes the full business context in which it is operating. Any engineer who refuses to learn at least a little about product design, marketing, and distribution channels and says, “Just tell me what to build” is not going to make software that cares about your users’ needs.

2. Collaboration is essential. While you want friendly, welcoming engineers who can empathize with both their coworkers and customers, it’s clearly impossible for engineers to know everything about all areas. Engineers who see themselves as part of a team, rather than lone specialists, are the ones who will excel and help build great companies. In other words, shared experiences and friendly communities, rather than solitary coding, increase software innovation, despite tech media’s lionization of technology giants like Steve Jobs. Jobs was brilliant, but he simply could not have delivered Apple’s stunning record of innovation without a strong team.

Engineers have to work well together, and with the rest of the company, in order to successfully innovate. We need to view software development like a Hollywood film — movie productions require the coordinated efforts of a producer, director, cinematographer, screenplay writer, editor, costume-maker and stage-hand.

3. Personal interaction is a must. Technology allows the workplace to be more distributed, so we have more individuals working remotely than in the past. While geographic barriers can’t be eliminated, we can retain a commitment to personal interaction as part of the development process.

At Chef, we use the Zoom video conferencing platform for everything from 1:1 meetings between people in different locations to “all-hands” company meetings, enabling everyone to see each other face-to-face. And we have an unwritten rule that conference calls will be conducted, where possible, with the video turned on. While it might be tempting for a CFO to rub his or her hands with glee at all the operational cost savings made possible by a distributed workforce (no rent, no electricity bill, for example), smart executives will reinvest those savings into strong collaboration tools, and, more critically, a travel budget for coworkers to regularly see each other in real life and spend time together. Sometimes there is simply no substitute for in-person shared experiences.

4. Scalable standardization is critical. Software engineers today are like 19th century blacksmiths; they each have their own tool kits and different ways of working. It’s very important that companies balance engineers’ interests in trying out new tools and languages with the potential that this experimentation will complicate product development, thereby making it more difficult to train new engineers and increasing costs related to an unwieldy, heterogeneous codebase. The companies most likely to succeed today are not the ones jumping on the latest JavaScript framework. They are the ones choosing battle-tested, “boring” technology. While they are unlikely to be splashed on the front page of Hacker News for their technology choices, they are the ones quietly building runaway companies. After all, Slack, one of the most successful collaboration products to hit the market in the last few years, uses the relatively “boring” LAMP (Linux, Apache, MySQL and PHP) stack that has been around since the 1990s.

5. Acceptance of complexity is vital. My last observation about software development is the most important: Complexity is a fact of life, because life itself is complex, and software systems mirror real-life systems. Why is that medical billing system so “convoluted,” after all? It’s because the rules governing medical billing are complex. Beware of anyone advocating “simplicity” as a virtue. Simplicity in and of itself is not desirable; ease of use and delight are. Don’t confuse the two. It’s possible to have software that can handle the most complex use cases and still be easy for someone to understand in a logical way. Simple software, on the other hand, eventually hits a wall when its architecture cannot handle complex, real-life use cases.

Software engineering generates astonishing and advanced products that are changing the world in powerful and profound ways. But we can’t forget that achieving a high level of innovation is still fundamentally a human endeavor, fraught with strong emotions and viewpoints beyond just the engineering aspects. Embracing the humanity of software development, its paradox, and, most importantly, its potential, is the only way to create great software products that your customers will love. Organizations that align software development teams to the five axioms outlined here will have the best chance at greatness.

Credits: Opensource

Correctly installing and configuring an integrated development environment, workspace, and build tools in order to contribute to a project can be a daunting and time-consuming task, even for experienced developers. Tyler Jewell, CEO of Codenvy, faced this problem while attempting to set up a simple Java project as he worked on rebuilding his coding skills after dealing with health issues and time spent in managerial positions. After multiple days of struggling, Jewell could not get the project to work, but inspiration struck him. He wanted to make it so that “anyone, anytime can contribute to a project without installing software.”

It is this idea that led to the development of Eclipse Che.

Eclipse Che is a web-based integrated development environment (IDE) and workspace. Workspaces in Eclipse Che are bundled with an appropriate runtime stack and serve their own IDE, all in one tightly integrated bundle. A project in one of these workspaces has everything it needs to run, without the developer having to do anything more than pick the correct stack when creating the workspace.

The ready-to-go bundled stacks included with Eclipse Che cover most of the modern popular languages. There are stacks for C++, Java, Go, PHP, Python, .NET, Node.js, Ruby on Rails, and Android development. A Stack Library provides even more options and if that is not enough, there is the option to create a custom stack that can provide specialized environments.

Eclipse Che is a full-featured IDE, not a simple web-based text editor. It is built on Orion and the JDT. Intellisense and debugging are both supported, and version control with both Git and Subversion is integrated. The IDE can even be shared by multiple users for pair programming. With just a web browser, a developer can write and debug their code. However, if a developer would prefer to use a desktop-based IDE, it is possible to connect to the workspace over an SSH connection.

One of the major technologies underlying Eclipse Che is Linux containers, via Docker. Workspaces are built using Docker, and installing a local copy of Eclipse Che requires nothing but Docker and a small script file. The first time che.sh start is run, the requisite Docker containers are downloaded and run. If setting up Docker to install Eclipse Che is too much work for you, Codenvy does offer online hosting options. They even provide 4GB workspaces for open source projects for any contributor to the project. Using Codenvy’s hosting option or another online hosting method, it is possible to provide a URL to potential contributors that will automatically create a workspace complete with a project’s code, all with one click.

Beyond Codenvy, contributors to Eclipse Che include Microsoft, Red Hat, IBM, Samsung, and many others. Several of the contributors are working on customized versions of Eclipse Che for their own specific purposes; for example, Samsung’s ARTIK IDE for IoT projects. A web-based IDE might turn some people off, but Eclipse Che has a lot to offer, and with so many big names in the industry involved, it is worth checking out.

Credits: Jaxenter


JAXenter: What are your duties and responsibilities within the Eclipse Foundation?

Christopher Guindon: I am the Lead Web Developer at the Eclipse Foundation. I often say to members of our community that our team is responsible for everything being served to your browser from Eclipse domains on ports 80 and 443. Obviously, this is not true, but we do contribute to a large portion of it!

We build and maintain services and websites for the Eclipse community. We have an extensive list of web properties that we support. At the moment, I am really proud of the work that we are doing for the Eclipse Marketplace and our new API service.

The webdev team at the Eclipse Foundation currently includes Éric Poirier and myself. We work in the IT department led by Denis Roy. You probably know Denis as one of the Eclipse Webmasters and you might have met him at one of our EclipseCon events.

It’s a very exciting time to be a web developer! Tools and frameworks are evolving very quickly and it’s our responsibility to stay up-to-date to make sure that our websites are secure, fast, modern, and user-friendly.

JAXenter: When did you join the Eclipse Foundation and why?

Christopher Guindon: I always had a passion for open source, and I knew at an early age that I needed to work for a company that shared my values and beliefs.

When I finished school, I started working for inter-vision.ca. At the time, it was a small startup that aimed to help not-for-profit organizations with open source. It was a great experience for me because I got to see firsthand how a business could make money and benefit from open source. This was at a time when a lot of businesses, organizations, and governments were still hesitant to adopt open source software, mostly because they didn’t really understand it. As I became aware of all of this, I came across a job posting from the Eclipse Foundation for a web developer position. At that very moment, I knew that this was going to be my dream job. I could pursue my love for writing code and contribute to the awareness and the benefits of open source at the same time. This was a win/win opportunity for me.

It’s been five years since I started working for the Eclipse Foundation, and I still believe that this is my dream job. I love writing code, I love working with everyone at the Foundation, but most importantly, I love the Eclipse community.

JAXenter: Which project(s) do you like most? 

Christopher Guindon: I support and am passionate about all of the projects at the Eclipse Foundation but I do have a soft spot for the Eclipse Marketplace Client (MPC). MPC allows Eclipse users to discover and install Eclipse solutions directly into their Eclipse installation. The solutions shown in MPC are tailored to each user based on their operating system, Java version, and Eclipse release.

JAXenter: What does the future of Eclipse (and the Foundation) look like? 

Christopher Guindon: Back in the early days, we used to focus on Java tools, but the Eclipse Community has grown to encompass a lot more than Java alone. Now, we also have Eclipse Working Groups that are involved with geospatial, scientific research and Internet of Things (IoT) technologies, just to name a few. I hope that the community continues to expand and that the Eclipse Foundation remains the home of choice for innovative open source projects.

JAXenter: Finally — Eclipse Neon. What is your favorite feature?

Christopher Guindon: The PHP Development Tools (PDT) project is one of the reasons why I use and love the Eclipse IDE so much. Our team writes PHP code on a daily basis, so it’s really great to see that it now fully supports PHP 7.

I am very happy with the new features from JSDT (JavaScript Development Tools). They now support two package managers, Bower and npm, and have also included support for Grunt and gulp tasks. Tasks are now accessible from the project explorer view and can be run via launch shortcuts.

I also love the Oomph project which allows users to manage personal preferences in their Eclipse workspace. Oomph is a very exciting project for us because it’s the first project to adopt the new Eclipse User Storage Service that our team built last year. The Eclipse User Storage Service (USS) allows Eclipse projects to store user-specific project information on the Eclipse Foundation servers. The goal is to make it easy for our projects to offer a better user experience by storing relevant information on our servers.

Credits: Theregister


Online commerce giant Alibaba is among a crop of “new world” Java users seeking to shape the direction of both language and platform.

Alibaba, one of the world’s largest users of Java, has entered the race for election to the ruling executive committee (EC) of the Java Community Process (JCP). Jack Ma’s ecommerce giant joined the JCP only three months ago – in August.

Also running for election to Java’s steering group are representatives of end user groups from China, Africa and Germany. One, the GreenTea Java User Group (JUG) in Shanghai, was founded and sponsored by technical staff from Alibaba.

Martijn Verburg, London JUG co-leader and jClarity chief executive, told The Reg that such diverse and new representation is a positive sign for a strong and growing Java. Also, it reflects a desire from the EC to attract more interest from outside the familiar pool of big US and European corporations.

“Java continues to show that it has broad appeal, attracting influencers that want to shape its future from a wide range of organisations,” said Verburg, who is also an EC candidate.

JCP chair Patrick Curran in May called the prospect of Alibaba’s membership “a very positive step, given its leading role in China, and indeed globally”. He believes that if Alibaba joins, other Chinese firms might follow suit.

It’s not just regional representation that Alibaba would bring; the commerce giant runs Java at massive scale and could have fresh input on web-scale Java. Alibaba reckons it has identified areas that could be redesigned to better serve its needs.

The firm claims in its election statement to have a world-beating Java footprint with “tens of thousands of machines running Java every second serving insurmountable numbers of web pages and transacting Alibaba’s full spectrum of services, including the world’s largest online ecommerce marketplace.”

Areas for change include that age-old hurdle to real-time performance – garbage collection – as well as deployment and just-in-time compilation.

“The environment we are facing is mostly a web server one, where short sessions come and go very quickly. We once again firmly believe that JVM/JDK/middleware may accommodate better by assuming certain behaviour and workload with web server characteristics. We therefore think joining the committee will help us express that,” Alibaba said.

Alibaba’s candidate is JVM team lead Haiping Zhao.

Should the big boys vying for a coveted seat be elected, power looks to be passing, ever so slightly, to the ever-more-important user groups.

Aside from GreenTea there is iJUG, an umbrella of 30 user groups in Germany, Austria and Switzerland founded in 2009. iJUG wants the JCP to be more open and democratic. This includes taking action on JSRs should spec leads go missing.

That should be seen as a response to the fact that Oracle’s JSR leads stopped working on Java EE this year to concentrate on development of the vendor’s cloud instead – a move that delayed Java EE 8 and, in the resulting frustration, bred MicroProfile.io.

Also in the running are representatives of the Morocco JUG for the associate seat.

The JCP’s ruling EC consists of 25 members, and it is the EC that guides the development of Java. It approves and votes on all technology proposals, in addition to setting the JCP’s rules and representing the interests of the broader JCP community.

The EC consists of 16 ratified seats, six elected and two brand-new associate seats, with Oracle holding a permanent position. The EC ballot closes on 14 November with the results announced the next day.

Credits: Jaxenter


In the 1990s, when Java began to appear on the developer scene, it met competition from Microsoft head-on and struggled to gain an accepted place within the international development community. Over the years, however, Java appears to have arrived at a more stable set of infrastructure and development standards than Microsoft now offers. This is the result of the two communities taking maturation trajectories that were, in a very real sense, diametrically opposed. Microsoft, at the time, was offering maturing technologies while Java was the “new kid on the block”. The two communities also took completely different viewpoints towards their product development. Microsoft offered products with “ease of use” as the underlying factor, allowing developers to create both desktop and web applications far more quickly than competing solutions could. This was especially true when compared against the new Java tools. Java, however, had its founding basis in the academic and scientific arenas.

One could see this easily in the many Java articles that took apart various interface controls that we in the Microsoft Community, with the exception of third-party control developers, took for granted.

The Java Community did have a difficult time securing its place as an accepted form of development until it presented an alternative to Microsoft products in large enterprise development, which was Microsoft’s weak spot, since Microsoft products at the time targeted division- and department-level applications. Once Java tools became more usable for developers, the large enterprise arena saw the advantages of Java development with its better-suited enterprise offerings (i.e., J2EE and, later, the Spring Framework).

Today, we find a Java Community that appears to be far more stable than the Microsoft Community, though one would not know it from the subdued reporting around it. Years ago such reporting was quite different, as the two communities fought each other for developer supremacy. Reading online magazines such as JAXenter.com today, most of the articles appear to concentrate on existing technologies along with their refinements. The opposite appears to be true for Microsoft, which seemingly approaches product refinement the way the US Pentagon approaches new weapons procurement; both throw out perfectly fine technologies and start over with brand new and completely untested concepts.

In this author’s opinion, as one who has worked with Microsoft technologies for his entire career since leaving the mainframe world around 1989, Microsoft has made some serious mistakes, not only with the products it offers but with how its style of developing applications has changed over the years. This appears to have happened most egregiously under the latest CEO of Microsoft, Satya Narayana Nadella; under Steve Ballmer, despite his terrible reputation, Microsoft appeared to be more stable in this regard.

With desktop applications, Microsoft now has two development offerings: the original “Windows Forms” model and the more current and recommended “Windows Presentation Foundation” (WPF), the latter being far more difficult in terms of styling and designing sophisticated interfaces. One of the overall concepts behind WPF was to make desktop development similar to web development, and in that regard it has been quite successful. However, the complexities and the rather disjointed documentation have made this style of development difficult for many developers.

Web development is a somewhat different story with Microsoft, and those who have read my pieces on it know that my opinion is rather negative: it is going in too many directions. Microsoft’s original “Web Forms” model no doubt had its own set of drawbacks, but it made it exceptionally easy to create rather complex web applications despite the flaws inherent in its internal design. It still does.

Somewhere along the line…

Somewhere along the line, some senior technical personnel got it into their heads that the entire Microsoft Community suddenly wanted what the Java Community had in this regard, the MVC pattern; this was possibly also a result of the terrible infighting that has been reported among Microsoft product groups. As a software engineer using Microsoft’s products at the time, I kept well abreast of the issues facing my developer community. Interestingly enough, I never found any such demand by its members. In fact, when ASP.NET MVC was first announced, I instead found a rather substantial negative reaction to it.

Despite this, I saw nothing wrong with such a product development. However, there was absolutely no need to turn the entire Community upside down to provide this new paradigm, since those developers who did want to use it already had an Open Source alternative in the Castle Project’s “MonoRail”. This unique and capable offering provided MVC development for Microsoft’s web platform in exactly the same fashion as Microsoft’s first version of ASP.NET MVC. In fact, a comparison of the installations would have shown that the solutions created were exact replicas of each other, leading one to wonder whether Microsoft’s first introduction of MVC was in fact a fork of the Castle Project’s “MonoRail”.

Nonetheless, unlike the Java Community, which maintained its initial toolsets and used them to successfully enter the enterprise, Microsoft, though it offered quite a range of quality tools, did not provide a similar scale of constructs that could be used for all levels of development. It is true that with its introduction of the .NET development environments, which would become Java’s fiercest competitor, Microsoft did offer distributed computing capabilities such as the Remoting Framework (and later the “Windows Communication Foundation” (WCF)), which was supposed to be an answer to Java’s more mature J2EE implementation (Java would offer the Spring Framework as an alternative due to J2EE’s initial performance and configuration issues). Despite Remoting’s capabilities, which were rather extensive at the time, the psychology of its usage did not reflect a similar strain of thought in the Java Community. So while the Java Community’s enterprise-level tools were all entwined with the ability to develop Java desktop and web applications, Microsoft’s offerings were too disparate to offer a similar level of psychological cohesiveness.

This was an unfortunate failure on Microsoft’s part, since .NET, even in its earliest configuration, had a great deal to offer both developers and organizations looking for modern rapid application development techniques, something the Java Community was not as equipped to offer. Indeed, .NET’s ease-of-use development features were as well designed as Java’s more extensive enterprise offerings. They not only offered developers a great way to create applications but also maintained a consistency for third-party companies who wanted to revise their tool offerings.

Over time the two communities began to mirror each other’s capabilities to a certain degree.

With “Windows Forms” and the newer “Windows Presentation Foundation”, Microsoft more than equaled Java’s capacity for desktop application development through AWT and other toolkits. For web applications, Microsoft offered “Internet Information Server” as its standard application server while the Java Community relied on “Apache” and later “GlassFish”, both excellent servers. For actual web development we had ASP.NET “Web Forms” while Java had its “Java Servlets” and its MVC development paradigm; the former offered a one-to-one page process (web page to code-behind) while the latter offered a one-to-many page process (web page to multiple controllers). Over time, both became accepted development methodologies for their developer communities. And though many in the Microsoft Community submitted complaints about certain difficulties with “Web Forms”, no one to my knowledge recommended that it be thrown out with the bathwater. Unfortunately, Microsoft has developed a rather bad reputation for discontinuing very solid products. Though it has not announced any plans to discontinue “Web Forms” support, one can easily see that the emphasis is increasingly on the ASP.NET MVC platform. The same is true between the older “Windows Forms” platform and the newer “Windows Presentation Foundation”, though neither has been responsible for the near hysteria one can find over the use of ASP.NET MVC and its associated tools.

As an example of discontinuing fine products, Microsoft developed the “SQL Server CE” desktop database engine. Coming in at around two megabytes, it is a very easy-to-use engine for SQL Server developers and supported both 32-bit and 64-bit machines. As major SQL Server development continued, Microsoft decided to discontinue this highly versatile database engine in favor of “LocalDB”, which has around a 14-megabyte footprint on the desktop. Needless to say, the newer engine is hardly on anyone’s radar, as many have opted to use SQLite or Firebird instead.

With Microsoft continuing its own refinements of the “Web Forms” and “Windows Presentation Foundation” environments, these models have come a long way and matured into a very solid form of software development. They are both stable and offer a tremendous amount of flexibility for organizations that need to develop desktop and web applications quickly, efficiently, and with modern styled interfaces. At this juncture, Microsoft offered more or less as stable a platform as the Java Community’s. To be sure, both communities offered different styles of development, but both styles were mature and could produce whatever applications a business organization might require. And with Microsoft’s increasing focus on its “Data Center” servers, enterprise-level development was finding a good home with Microsoft as well.

Nonetheless, with such success in both camps, something went wrong on the Microsoft side of things. For the Java Community, which was quite open with its use of Open Source software, the basic tenets of software development never really detoured into the hype and claptrap that has redefined the Microsoft development community. Much of Microsoft’s detour was a result of the ASP.NET MVC paradigm. And the reasons promoted within the Microsoft Community for MVC over “Web Forms”, or WPF over “Windows Forms”, are for the most part complete nonsense, since all of these development models do exactly the same things: they create either web applications or desktop applications. One could argue that with the emergence of mobile computing ASP.NET MVC was a better solution, but so far there has been no evidence of this in terms of Microsoft development.

The problem is that Microsoft, which is as much a marketing company as it is a software vendor, has never recognized the simple fact that tooling for business application development has matured to a point where little more can be accomplished in this arena: there are few new tools left to create that would offer something substantially unique enough to actually promote a new form of development style. Unless there is some drastic change in the foundations on which our hardware and equipment are built, such a radical change is just not going to happen.

Microsoft simply failed to understand that what it had developed was perfectly fine for developers, having reached a level of maturity equivalent to the Java Community’s. And since Microsoft saw things through the lens of a marketing organization (why else would it make Steve Ballmer, a salesman, CEO of the company when Bill Gates left?), it seemed to need the “next new thing” that might get Java developers to convert to the Microsoft product line or encourage youngsters, teenagers, and college students to use its tools. Unfortunately, for all its efforts, Microsoft appears to have failed in these veins, since I have seen far more developers commenting on how they left the Microsoft Community than the other way around. Possibly this small exodus was a result of the growing ambiguity in the development choices Microsoft was offering. As to teaching youngsters to code: I have never heard of anything so ridiculous. As with any technical profession, there first has to be a certain level of talent for the training to be of any use. Besides, Microsoft always has hidden agendas, and in this case it is to develop a new level of cheap labor, not to develop “critical thinking” skills.

All development paradigms are equal

All of the paradigms work, and all have their advantages and disadvantages, since in reality there is no such thing as pure efficiency in development. For every technique implemented there is a disadvantage that has to be lived with. Thus, development decisions are based primarily on what a developer feels most comfortable with, as well as what they know. Despite the technical attributes and the science behind them, such decisions are for the most part quite subjective in nature. And considering that most technical managers couldn’t care less, so long as their technical choices will not get them fired, I doubt there is any serious concern on their part as to the merits of any particular development technique.

I enjoy working in VB.NET; so shoot me! I can work in C# just as efficiently, and do, but I came up from the dBASE world in the late 1980s and feel most comfortable with 4GL-like syntax. We did some great things back then with the dBASE environments, and it’s somewhat of a shame that they were relegated to the dustbins of history.

Today, in the Microsoft Community, we are experiencing with Microsoft’s tools what the Java Community experienced when it first emerged in the industry: a concentration on tools, techniques, and gimmicks instead of on quality application development, regardless of the tools used.

Nonetheless, mature, solid software development tools are not easily disposed of. A good example of this is the longevity of the main third-generation languages: Visual Basic, C#, Java, and C++. All of them do everything a developer requires for his or her work and have done so for quite some time; everything else, for all intents and purposes, is mere distraction and fluff. It is true that certain languages have been developed for specific purposes, such as Modula-3, Prolog, and some of the more recent dynamic languages, but for the most part the primary development languages have remained the dominant forces in software engineering since the mainframe saw its demise, and they will probably remain so for a long time to come.

Microsoft application development — The original and mature style of development

Once Microsoft released “Windows Presentation Foundation” (first shipped with .NET 3.0 in 2006), it had come to the point where it could promote a single style of development for both desktop and web applications. Had it promoted such a unified environment, while refining it to support the emerging mobile environments, our side of the fence would have been far more stable in terms of development issues, since our knowledge bases would have had far more depth.

As regards these environments, there are many “technical” advantages to using MVC (web), MVVM (desktop), and the like, but in the end none of these advantages provide much benefit to the majority of organizations that require application development.

For example, many developers support the idea that using such paradigms can make testing far more efficient since developers can use a variety of testing frameworks and test the components separately from each other.

However, there is no evidence of a difference in subsequent quality between good traditional testing techniques and the more current ones. In combat aviation there is one axiom that all fighter pilots have learned since the advent of fighter aircraft: it is the pilot behind the stick, and not the airplane, that makes an engagement a success.

In both styles of testing, one has to take the time to test properly. Traditional testing may be more of a manual effort, but the extra time taken in newer forms of testing (“Test-Driven Development” (TDD)) is applied instead to the development of additional testing code. And bad testing can occur in both models.
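To make the claim about testing components in isolation concrete, here is a minimal sketch in Java using nothing but plain assertions; the class name and its pricing rule are invented for illustration, not taken from any real framework or product.

```java
// A hypothetical illustration of testing a middle-tier component in
// isolation with plain assertions -- no testing framework required.
// The class and its pricing rule are invented for this example.
public class IsolatedComponentTest {

    // The component under test: business logic with no dependency on
    // the interface tier or the database tier.
    static class DiscountCalculator {
        // Prices in integer cents to avoid floating-point surprises.
        int totalCents(int unitPriceCents, int quantity) {
            int gross = unitPriceCents * quantity;
            // Invented rule: 10% off for orders of ten or more units.
            return quantity >= 10 ? gross - gross / 10 : gross;
        }
    }

    public static void main(String[] args) {
        DiscountCalculator calc = new DiscountCalculator();
        // Because the logic lives outside the request/response module,
        // it can be exercised without a browser, a server, or a mock.
        check(calc.totalCents(500, 2) == 1000);
        check(calc.totalCents(500, 10) == 4500);
        System.out.println("all checks passed");
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("check failed");
    }
}
```

The point is not the framework but the separation: whether the checks are typed by hand or generated by a TDD harness, the effort of testing still has to be spent somewhere.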

In fact, with traditional testing it is less acceptable to allow defects into production environments, since preventing exactly that is the intent of such Quality Control. However, I cannot count the number of articles I have read in which it is now quite common to discuss allowing defects into production under TDD/Agile, the justification being that they will be found quickly and can be easily rectified through automated deployment processes. Tell that to the end user of such an application when something goes annoyingly wrong.

The style of developing applications with Microsoft tools, without the MVC and MVVM paradigms, is still based upon a single middle-tier module that captures and responds to user requests. In standard ASP.NET “Web Forms”, this module came to be known as the “code-behind”, and it is more or less called the same in the newer “Windows Presentation Foundation” (WPF) desktop applications. Surprisingly, Microsoft never seemed to come up with an acronym for this module in the WPF environment, but it is more or less the same thing. Thus, in both cases we can look at this type of development from the standpoint shown in the graphic below.

[Graphic: Common Sense Software Engineering]

This graphic demonstrates how development has been accomplished historically on most platforms, though Microsoft more or less codified it in its products. There is absolutely nothing wrong with this model on the web as long as you are not continuously passing massive amounts of data back and forth; if you are, performance will noticeably suffer.

The attempt to make the web more performance-efficient has found developers as well as vendors coming up with techniques to avoid a complete “post-back” of such requests by a user. The result was the introduction of AJAX, which even today is the basis for all calls to the backend that don’t involve a complete submission of the web form. Microsoft AJAX, despite some of its drawbacks, actually works quite nicely and is still professionally maintained. You simply cannot beat the ease of use that the AJAX “UpdatePanel” affords developers.

For desktop applications this is not as much of a concern, as the middle-tier (request/response) module for WPF or “Windows Forms” resides on the same machine as the interface and the user working with it.

Nonetheless, for the web, developers kept pursuing some type of “silver bullet” that would somehow yield maximum performance in their applications. The result was that the pursuit of new tools and frameworks became a guidepost for Microsoft developers in terms of their skills and capabilities. Microsoft foolishly followed suit instead of concentrating on the refinement of its core products. The Java Community, apparently satisfied with what it had accomplished, has found little need to rush into unknowns the way the Microsoft world has.

The reality is that on the web, as systems engineers wrote years ago, performance is a matter of hardware and configuration, and not so much of code, though you can do things to improve the responsiveness of an application. In addition, in large-scale applications such as those found on the web, or in very large client-server applications on the desktop, the pursuit of maximum performance is largely an illusion, since the overriding concern in both types of applications is “concurrency”, which precludes achieving maximum performance. What you want is acceptable performance when large numbers of users are accessing the application.

True N-Tiered development

In terms of development, many Microsoft developers continue to make a mess of how they handle the “code-behind” modules by loading them down with just about all the processing required for each interface form. The same has been done with “Windows Forms” and WPF development, and undoubtedly this happens just as often with Microsoft’s version of MVC. This type of development causes inherent bottlenecks in applications, since such modules were actually meant to be used as “dispatchers” to other modules that would handle the actual business logic and backend processes. Thus, the actual construct of this style of development would look as follows…

[Graphic 1]

The process modules in this case represent what we used to call the middle tier, while the interface and request/response (code-behind) modules represent the frontend tier, with the request/response modules being implemented on an application server in the case of web applications, or on the same machine in the case of desktop applications.

The process modules were also known as business objects, where the data being passed back and forth to the clients would be manipulated through calculations, text transformations, and other types of algorithms. Specific modules that could be shared across the breadth of the process modules were designed to implement specific but more or less generic forms of processing, such as database access, security authentication, and common string-manipulation functions, to name a few…
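The tiering just described can be sketched in a few lines of Java; all class names and the toy logic here are invented for illustration, not drawn from any real codebase. The “code-behind” acts purely as a dispatcher, the process module carries the business logic, and a shared module provides a generic service used across the middle tier.

```java
// A minimal, hypothetical sketch of n-tier separation: dispatcher,
// process module (business object), and a shared utility module.
public class NTierSketch {

    // Shared module: a generic string-manipulation function of the
    // kind shared across many process modules.
    static class SharedText {
        static String normalize(String s) {
            return s.trim().toLowerCase();
        }
    }

    // Process module ("business object"): transforms the data passed
    // between the client and the backend.
    static class OrderProcess {
        String confirm(String customerName) {
            return "Order confirmed for " + SharedText.normalize(customerName);
        }
    }

    // Frontend request/response module: no business logic of its own;
    // it only dispatches to the middle tier.
    static class OrderPage {
        private final OrderProcess process = new OrderProcess();
        String onSubmit(String rawInput) {
            return process.confirm(rawInput);
        }
    }

    public static void main(String[] args) {
        // prints: Order confirmed for alice
        System.out.println(new OrderPage().onSubmit("  ALICE "));
    }
}
```

Keeping the page module this thin is what allows the process modules to be moved onto separate machines later without rewriting the frontend.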

[Graphic 2]

Finally, we have the backend tier, which comprises the database.

[Graphic 3]

To many older developers this should be common knowledge, while younger Microsoft professionals should have been taught that this is the proper way to develop high-performance applications. Like a Porsche, “There is no substitute!”, no matter how you slice or dice it.

True N-Tiered development: The infrastructure is more important than coding

However, even if this coding infrastructure is set up efficiently and cleanly, a heavily accessed site will most likely not yield the performance gains one was hoping for. Even if the application is developed with ASP.NET MVC instead of “Web Forms”, the many controller calls from the router may present a severe bottleneck, given that routing has been the historical point of contention with MVC; this aspect of the framework has historically been based upon “Reflection”, which is not the speediest process on the planet.

Since I am not a Java developer I do not know how that community circumvents these problems, though being a community that grew out of a more sophisticated background than we Microsoft developers, I would guess that the next recourse is fairly common with large Java applications. The Java community did, however, incorporate servlets, which were in a way mini-services that would process requests and responses on the middle tier. It was quite ingenious and still works quite nicely.

Back in the 1990s before the rise of popularized manifestos, MVC, JavaScript, and the like, the idea of actual distributed computing was a foundation for true performance in either web or client-server applications. In fact, one of the best developer manuals on this subject was actually written by a chemist turned programmer who developed high-performance web and client-server systems for the State of North Dakota (It may have been South Dakota, I don’t remember.) that had over 1500 clients hitting the central systems at any one time.

This scientist worked out the framework for a hardware\database configuration that even today simply cannot be beat for large scale applications. He took the standards of the day and designed the framework in its entirety, measuring performance benchmarks for each level of the configuration until he had refined each of the levels to an optimal efficiency.

However, with the rise of ever faster microprocessors these foundations were quickly overshadowed by budget pressures, which would soon be compounded by the rising outsourcing mania.

And yet, it was these foundations, not merely the fact that such systems supported many remote clients, that provided the core concepts of distributed computing.

To return to the above graphic: for each of the various levels of processing to run optimally, it cannot rely on a single machine. Yet that is what many IT organizations do today. Larger organizations do provide very powerful application servers, which encourages the use of single-machine deployments.

Though such deployments may yield satisfactory results, unless the server is very powerful, such as with a mainframe or a set of high-powered UNIX machines, the scalability of such configurations will be finite.

To work around this, developers have turned to focusing on their code as the area where performance bottlenecks can occur. In some cases increased performance can be achieved through coding refinements, but in reality not much. True performance is based upon hardware, not software. No matter how efficient one makes their code, it will only run as fast as the machine it is deployed on can process it, and the more clients a machine serves, the slower the performance will be as that single machine works increasingly harder to process the load.

To work around such a limitation, developers began implementing threading in some of their applications, which again is mostly bounded by the hardware available. An early document suggested a maximum of ten threads per microprocessor, with an optimal recommendation of a single thread per microprocessor. With machines that could incorporate multiple microprocessors this seemed like a substantive idea. However, how many organizations provide 32\64-core machines for their applications, and how many of those machines are set aside for a single application? I would suggest not many.
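The idea of sizing thread counts to the hardware rather than picking an arbitrary number can be shown in a few lines. This is a Python sketch, and the workload function is invented purely for illustration:

```python
# Size a thread pool to the machine's core count rather than a magic number.
# process_request is a stand-in for real per-request work.
import os
from concurrent.futures import ThreadPoolExecutor

def process_request(n):
    # Placeholder for actual request processing.
    return n * n

# One worker per core as a conservative baseline; the old guidance discussed
# above capped this at roughly ten threads per processor.
workers = os.cpu_count() or 1

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(process_request, range(8)))
```

The point is that the pool size is derived from the hardware at hand, so the same code throttles itself appropriately on a 4-core box or a 64-core one.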

The result was that threading found its niche by providing flexibility to the interface, which was always one of its intended purposes. And threading does work very well when implemented properly; you just have to be judicious about it.

Alongside this type of implementation we have also seen the rise of asynchronous programming. However, this type of processing only shines when an application has a long-running task that can be placed on a background thread because the data is not needed immediately. This style of programming is not really about performance but, again, about flexibility.

Getting back to the previous graphic: what we need to get maximum performance out of an application that can be easily scaled is also something very few organizations will opt for, since it takes time to design and test and has its own expenses to implement. Nonetheless, to do this we start with a basic hardware configuration as shown below…

[Graphic 4]

Before we move on, please take note of the database section in the above graphic. You will see two types of database engines: OLTP for transactional processing and OLAP for query processing. Why do we need two different types of database engines for a single application? For rather small applications we don’t, but for applications that need to scale, now or in the future, this is the standard for setting up the backend tier. OLTP databases are most efficient for transactional processing, while OLAP databases are optimized for querying. Though OLAP database engines are touted for Business Intelligence systems, the reality is that such systems are optimized mostly for querying in addition to their numerical functionality.

To ensure that each database system is kept up to date in terms of its data, the OLTP database engine is designed to carry any updates over to its OLAP sibling.

By splitting out database systems in such a manner you can actually increase the performance of the backend by over 50% by allowing each database engine to do what it does best.
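The write-to-OLTP, read-from-OLAP split can be sketched in miniature. Here two SQLite databases stand in for real transactional and analytical engines, the table and function names are invented, and the carry-over is a naive synchronous mirror where a real OLTP engine would replicate asynchronously:

```python
# Sketch of the OLTP/OLAP split: writes go to the transactional side and are
# carried over to the analytical side; queries are served from the OLAP side.
import sqlite3

oltp = sqlite3.connect(":memory:")  # transactional engine stand-in
olap = sqlite3.connect(":memory:")  # query engine stand-in
for db in (oltp, olap):
    db.execute("CREATE TABLE sales (id INTEGER, amount REAL)")

def record_sale(sale_id, amount):
    """Writes hit the OLTP engine, then are mirrored to its OLAP sibling."""
    with oltp:
        oltp.execute("INSERT INTO sales VALUES (?, ?)", (sale_id, amount))
    with olap:  # naive stand-in for the OLTP-to-OLAP carry-over
        olap.execute("INSERT INTO sales VALUES (?, ?)", (sale_id, amount))

def total_sales():
    """Reads are routed to the OLAP engine, leaving OLTP free for writes."""
    return olap.execute("SELECT SUM(amount) FROM sales").fetchone()[0]

record_sale(1, 100.0)
record_sale(2, 250.0)
total = total_sales()
```

The routing logic is the whole trick: each engine only ever sees the kind of work it is best at.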

Finally, and this is what demonstrates the increased cost of such an implementation, each tier is composed of multiple machines, and the number of machines increases as we get closer to the frontend tier (the interface). The scientist who worked out the performance benchmarks for each tier in his own designs found that the further down a process travels through the tiers of this infrastructure, the less time each successive tier takes to do its work. This is because each tier of machines has less to do in responding to a user request. As a result, this type of infrastructure would look like the updated graphic below…

[Graphic 5]

This latest graphic then demonstrates the proper configuration for a large scale deployment of a high concurrency application.

Each level of this infrastructure can be linearly scaled out by adding additional hardware components as each level would be made up of machine clusters.

The response of many to this type of infrastructure would be that it is too costly and time consuming to design and implement. And they would be correct. However, this is the proper technique for implementing high-performance systems, and there is very little that lets one get around it.

The hype today, especially in the Microsoft community, is that we can build such high-performing applications with scripting languages and with the MVC paradigm in C# or VB.NET. It is simply not true that this is all that is required, but it makes for good marketing. Good application development requires not only clean and efficient code but an infrastructure that can provide the power the code can take advantage of.

Performance-based code is highly overrated

No doubt the majority of Microsoft organizations do not follow the aforementioned standard for the usual reasons: too much money, too much design time, both of which I mentioned previously. Companies like to work on the cheap, and so the network and hardware people think of ways to combine applications onto existing hardware, which, as soon as it is done, cuts the performance of the heavily used applications significantly. Such combining of applications is fine for low-use or even some medium-use applications, but where performance is concerned, such combinations will only hamper response times.

The last excuse organizations love to use is the “perfect world syndrome”: “Well, in a perfect world we would have this.” The problem is that no one even tries to get near anything resembling a credible infrastructure except for the technically savvy organizations.

This derailing of actual n-tiered development has fostered the idea that applications can be made to perform very efficiently through refinement of code. For the most part this is utter nonsense, but in the attempt to produce such performance, technical managers have put terrible pressure on their developers to refine their code bases to the nth degree, saving only milliseconds. This is not to say that you cannot find substantive performance enhancements in refining existing processes, but it cannot be done simply through a refinement of existing code bases.

Years ago I worked with someone who thought he was the “gift to object oriented programming” and only he knew how to develop code properly. I am quite sure that everyone reading this piece has experienced a person of this nature at one point (or many) during their careers. There are plenty of these “fools” out there.

This fellow in particular just about cursed me out over a set of database queries I had written and placed as singular selects in a stored procedure. He promptly redid my stored procedure, combining my queries into a joined set. I accepted his work calmly and, out of curiosity, proceeded to run performance benchmarks on the original stored procedure and on the updated one that he claimed was the proper way to write what I was doing. I never told him the results, thinking he would just waste my time even more by trying to refine his own work. Nonetheless, his version took 35 milliseconds longer than my original code.
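The useful habit in that story is measuring both versions rather than arguing about style. A small, repeatable version of that comparison can be sketched as follows; SQLite and the invented schema here only stand in for the real stored procedures, and the absolute numbers are meaningless, only the method matters:

```python
# Time a set of singular selects against one joined query over the same data.
# Schema, data, and function names are invented for the example.
import sqlite3
import timeit

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
""")
db.execute("INSERT INTO customers VALUES (1, 'Acme')")
db.execute("INSERT INTO orders VALUES (1, 1, 42.0)")

def singular_selects():
    # Two simple selects, one per table.
    name = db.execute("SELECT name FROM customers WHERE id = 1").fetchone()[0]
    amount = db.execute(
        "SELECT amount FROM orders WHERE customer_id = 1").fetchone()[0]
    return name, amount

def joined_query():
    # The "cleverer" combined form.
    return db.execute("""
        SELECT c.name, o.amount
        FROM customers c JOIN orders o ON o.customer_id = c.id
        WHERE c.id = 1
    """).fetchone()

# Benchmark both; never assume the combined form wins.
t_singular = timeit.timeit(singular_selects, number=1000)
t_joined = timeit.timeit(joined_query, number=1000)
```

Whichever form wins on a given engine and data set, the benchmark settles it in seconds, which is exactly what ended the argument in the anecdote above.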

In either case, my faster stored procedure or his slightly slower one would make little difference in the scheme of things, since humans cannot perceive time at such levels. Yet such attention to developers’ code has over the years become near sacrosanct in many organizations, to the detriment of everything else involved, since for the most part it is a giant waste of time, especially if the infrastructure on which the application will run is not optimized properly. Any student of modern compiler theory and internals will be able to tell you this.

First and foremost, all of the compilers in today’s business world, whether for source-code languages or database systems, are extremely well optimized. Even if your code is somewhat inefficient, these compilers will make up for it by producing a near-optimal translation of what you are attempting to write. Does this mean you get to write sloppy code? No…

What it means is that attention to such performance details in your coding is more an esoteric endeavor than one that actually yields increased performance. It means that good, clean code will work well and will most likely still be optimized by the compiler. Bad code will work… well, badly, but probably not as badly as one would expect, since the compiler will most likely eliminate some of the issues.

Note that we are talking about working code in both situations, not code so poorly written that it will not work, or barely does.

Lastly, I once worked on an application that had been completely coded by someone else. The coding was atrocious and much of it completely wasteful. In fact, after having been able to study and work with the application for some time, I found that at a minimum 50% of the code was unnecessary and could be removed if the application were rewritten. Though I never had the opportunity to do this, the application, for all its inherently poor design and coding, actually performed reasonably well in terms of user response and processing. And this included a very poorly designed database as well.

No matter what type of application you design and implement, as long as you write your code in a clean and legible manner it will work just fine under low loads of activity. If you refine your code as much as possible toward making it as high performing as possible, it will still work to the same degree under low levels of load, since the optimizations that you and the compiler make will be imperceptible to human users. And if you deploy the application in a situation where a high concurrency rate is expected, no matter how well your code is written it will fail to perform properly without the proper infrastructure. It is that simple.

Microsoft developers can do better than JavaScript and all its related tools

Despite these fairly simple engineering axioms, developers have still tried to optimize their applications in various ways that attempt to work around poor infrastructure implementations.

Today, the apparent standard on the web in the Microsoft world is what we used to call the “fat client” where as much as possible is placed within the client area of an application. The thinking is now that JavaScript reigns supreme and that as long as one implements all of the necessary libraries and dependencies on a web page everything will be highly efficient.

It is rather surprising to see developers praise a language that was never designed to do what it is being used for now, while at the same time developing ever more tools that make use of the same language, in a way that makes the whole sloppy mess seem, figuratively speaking, recursive.

JavaScript, like all languages of its type, falls within the domain of what we now call “dynamic” languages, which means that it is interpreted at run time. In turn, this means such languages are, in general, inefficient no matter how fast the JIT compiler may be. Their primary advantage was always quick development, not quick performance.

This approach encourages building applications that make individual data calls to the server, which responds in kind. Some of this design came out of the mad rush to eliminate as many post-backs to the server as possible, since post-backs entailed passing undesired amounts of data.

To some degree this concept made sense. However, it appears to have been lost on Microsoft developers especially that this form of data call was rectified rather early on in “Web Forms” through the “web method”, which works quite nicely. It uses the same type of call from the popular “jQuery” framework (as well as others) that one would use if the call were made to a controller in an MVC application.

So does it matter where the method is implemented: within a single-page code-behind or an MVC controller, which may be only one of several receiving requests from a single web page? Most likely not…

It is true that sophisticated developers have made legitimate complaints about the way Microsoft’s ASP.NET “Web Forms” does its processing against a “one to one” web page and its corresponding code-behind module. However, many of these complaints have been addressed over the years through refinements Microsoft has made to the “Web Forms” paradigm. And the same has been true of Microsoft’s ASP.NET MVC implementation, or there wouldn’t have been so many version releases in the past six years. The issue is: why did we ever need two divergent development paradigms for the web when the original and more mature model offers nearly identical performance while being far more flexible to develop with?

The Microsoft community has become erratic when compared to the Java world

While the move from “Windows Forms” to the “Windows Presentation Foundation” (WPF) made a lot of sense, in that both web and desktop applications can now be developed in similar styles, the move from ASP.NET “Web Forms” to ASP.NET MVC has not had a similar level of credibility, despite what the pundits may say. In fact, it is this move that has had a terrible effect on Microsoft development, stunting the growth of more mature technologies that worked equally well.

In some respects the thinking behind these evolutions has resulted from attempts to break application development into tasks for separate groups, so that one group could design the interface while another would implement and test it. This makes sense, but in most real-world situations it does not often happen in the Microsoft development community. Instead, developers tend to be responsible for most aspects of application development. This is why job postings for Microsoft developers often carry a laundry list of skill requirements; a list so long that no one could possibly be well skilled in all of the items. If the Java community has seen the same style of job offerings, then it is familiar with this problem.

Nonetheless, one result of these changes in the Microsoft development environments has been to take advantage of the now-plentiful free offerings from the open-source community to create new tools based upon inefficient language mechanisms, not out of necessity but from wanting to try something new. There is nothing wrong with this endeavor, but when a company attempts to mold all such activities into an existing development environment such as Visual Studio, you end up with 6-gigabyte installations, most of which each developer will hardly use, taking advantage of only what interests him or her.

Microsoft marketing consistently touts the advantages to creating great applications and modern user experiences with such new and “exciting” tools. However, if you have been in the industry as long as I have, you find that the marketing has become highly redundant by repeating the same mantras that began with the release of Windows 95. Nothing ever really changes…

Don’t get me wrong, Microsoft has some great development environments and tools as does the Java Community. However, there is a limit to what can be effectively done with the development of business applications using current protocols and infrastructures in either case. The Java Community has refined its offerings around scalability from the beginning and has matured in that endeavor. Microsoft on the other hand got off to a good start with its own offerings and could have done much more with these initial environments if they had stuck to refining what were already decent foundations.

However, instead of doing this, Microsoft persists with new offerings in some misguided attempt to remain relevant, making these offerings as bloated as the original ones were claimed to be, all while seeking that multi-pronged “silver bullet” that will not only yield high-performing applications but also offer development environments efficient enough to convince many in the Java community to convert.

I believe this is a fool’s errand, but businesses are often run by people who are perceived to be intelligent but really aren’t. Good developers understand what they need to create good applications, and it would behoove Microsoft to stop throwing everything out with the bathwater and instead start filtering for the community what is necessary, what makes sense, and what makes for good, easy-to-use development environments. This would be a far superior approach to turning their entire section of the industry into a technical nightmare, creating a future based upon quicksand…

Credits: Gsmarena


Google promised that the newest developer preview would be available to flash starting this month, and Google was indeed able to deliver on that promise. And with great timing as well: tomorrow is when the Google Pixel and Pixel XL officially go on sale at Verizon and Best Buy stores in the US with Android 7.1 out of the box.

It looks like Google is taking a more hands-on approach to the way it handles updates for its Nexus devices. We’ve never had developer previews this early on, and for an incremental update at that. It looks like Google is really committed to delivering the quickest updates to its Pixel devices, and the best way to do that is by ironing out the kinks with the older Nexus crowd.

The Developer Preview is available to flash for the Nexus 6P, Nexus 5X, and the Pixel C.

To install the developer preview, you can do one of three things:

  • You can sign up for the Android Beta program and you’ll be able to download the developer preview OTA.
  • You can download the entire factory image as a TGZ file and flash it to a phone using the flash-all.bat script.
  • You can download the OTA zip file which will bring your device to the latest preview build.

If you have no idea what any of the above methods mean, we suggest sticking to the first one: enrolling in the Android Beta program and accepting the update over the air. Just remember, if you later want to revert to the most recent public software, your data might be wiped in the process.