This week's blog post discusses my favorite principle in software development: the open-closed principle (OCP). The OCP encourages independence in software design. As outlined in the article, it states: class behavior should be open for extension but closed for modification. This statement can be separated into two parts. The first part, as the article explains, concerns extending the behavior of a class: new behavior can be added without affecting the behaviors that already exist. This is what is meant by independence, and since a change in behavior means a change in source code, the first part permits adding new source code. The second part states that class behavior should be closed for modification. Is this a contradiction? Modification means changing source code, which the second part discourages. The resolution is that class behaviors should be independent of one another: when a class is extended with new behavior, the source code for its other behaviors should not need to change. The other definition introduced in the article is: software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. The explanation is the same as just described: the principle encourages independence among software components.
The article acknowledges that the two sides of the principle sound contradictory, but I think it does a good job of resolving the tension with its explanations and examples. As the article explains, the main idea behind the OCP is that the behavior of a system can change by replacing "entities that constitute it with alternative implementations," while the other behaviors remain independent and therefore need not change. The example given is calculating taxes with a TaxCalculator interface. If we swap the UsTaxCalculator for a UkTaxCalculator, no existing logic is modified; we simply provide a new implementation of an existing interface. Overall, the main idea is that you can add behaviors to a system by adding new code while leaving the existing code untouched. This separability makes system behavior easy to modify and change.
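The tax example can be sketched in a few lines of Java. Only the type names TaxCalculator, UsTaxCalculator, and UkTaxCalculator come from the article; the method signature and the rates below are placeholders of my own.

```java
// Existing interface: closed for modification.
interface TaxCalculator {
    double calculateTax(double income);
}

// Existing implementation; its source never changes.
class UsTaxCalculator implements TaxCalculator {
    public double calculateTax(double income) {
        return income * 0.25; // hypothetical US rate
    }
}

// Open for extension: UK support arrives purely as new code.
class UkTaxCalculator implements TaxCalculator {
    public double calculateTax(double income) {
        return income * 0.20; // hypothetical UK rate
    }
}
```

Swapping calculators is then just a matter of constructing a different implementation; nothing that depends on the TaxCalculator interface has to change.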
Finally, I will close this post with the three rules of thumb the article suggests for deciding when to apply the open-closed principle. The first is to add extension points to classes that implement intrinsically unstable requirements; this keeps the code clean and makes system behavior easier to modify. The second is not to add further preliminary extension points beyond those. The third is to refactor and introduce additional extension points when parts of the code prove unstable.
Overall, I chose this article to compare my understanding of the OCP with another software developer's. In my opinion, the OCP is based on the independence of methods, functions, and classes. Although the article indirectly touches on this point, I think it misses the chance to state it directly.
The last two blog posts on code review discussed suggestions for improvement, its overall advantages, and a general overview of what exactly it is; this week's blog post on code review focuses on the statistics. The post lists 11 proven practices, based on experiments and surveys of experienced software developers, that have helped team members improve their code review abilities. Why is this important? Suggestions from experienced developers are based on experience, but this post crunches the numbers from case studies to demonstrate their effectiveness and to back the 11 practices with statistics. That is why I chose this article this week: to emphasize the importance of code reviews.
In my opinion, this article is one of the best I have seen so far at both providing suggestions for code reviews and crunching the numbers behind them. As introduced, the suggestions in the post were compiled from code review studies and collections of lessons learned from over 6,000 programmers at over 100 companies. The main issue with code review, as the article suggests, is efficiency: reviews are often too long to be practical. The conclusions come from a study at Cisco Systems covering 2,500 code reviews, 50 programmers, and 3.2 million lines of code. The study tracked production teams with members in Bangalore, Budapest, and San Jose for 10 months.
The first suggestion is to review no more than 200-400 lines of code at a time. The Cisco code review study revealed that the optimal range for finding defects is 200-400 lines at a time; beyond that, the ability to find defects decreases. The statistic is that if 10 defects existed, the team would likely find 7 to 9 of them. What is interesting is the graph shown in figure 1 of defect density against the number of lines of code under review: as the number of lines under review grows beyond 200, the defect density drops off considerably. So the optimal effectiveness is at 200-400 lines.
The second suggestion is to aim for an inspection rate below 300-500 lines of code per hour. The inspection rate measures how fast the team reviews code; according to figure 2, inspection effectiveness falls off when code is reviewed faster than about 500 lines per hour. Finally, the last interesting point is to never review code for more than 90 minutes at a stretch. The article suggests that developers take enough time for a proper, slow review, but no more than 90 minutes. Many case studies have shown that after about 60 minutes, productivity, effort, and concentration all diminish: most reviewers simply get tired and stop finding additional defects, and some cannot review more than 300-600 lines of code before their performance drops. So the rate of finding bugs deteriorates after 60 minutes, and code reviews should not last more than 90.
I chose this article because the suggestions were too good to pass up. I find the first three suggestions helpful to keep in mind when reviewing code, since one of the main concerns with code review is effectiveness, which always diminishes with time. This article provides great, data-backed guidance on the optimal review duration, the amount of code to review at a time, and more, to help inexperienced reviewers improve their code reviewing skills.
This week is another blog post on code review. Unlike the previous article, which gives suggestions, this article is an overview of code review. So, what is code review? This article defines it as a systematic examination of source code. Its main intent is to find and fix mistakes overlooked in the initial development phase. This helps to improve the quality of the software and also provides critiques of the developer's work, which is always an advantage for improving their skills.
Why are code reviews important? We start with some statistics. The article lists the average defect detection rate as 25% for unit testing, 35% for functional testing, and 45% for integration testing. In contrast, the rate of defect detection through code review is 55% to 60%, indicating that code review is one of the most effective quality practices in software development. The article goes on to list some further statistics that I thought were extremely interesting. Before code reviews were introduced into one organization, an estimated 55% of one-line maintenance changes were in error; after their introduction, that figure fell to 2%. Likewise, after code reviews were introduced, an estimated 95% of changes were correct the first time, whereas before, under 20% were. In one experiment, a group of 11 programs were developed by the same group of people, split into two sets: the first 5 programs were developed without code reviews, the remaining 6 with them. After release into production, the first 5 averaged 4.5 errors per 100 lines of code, while the reviewed programs averaged 0.82 errors per 100 lines. Reviews were found to cut errors by over 80%, which demonstrates the importance of code reviews. For this study, however, I wonder whether the individuals on both sides had equal experience and skill. Were both sides equally weighted? Another interesting statistic is IBM's 500,000-line Orbit project, which used 11 levels of inspection and shipped with only about 1% of the errors that would normally be expected.
Finally, we close off this blog post with a discussion of the main goals of code review. The article lists the primary intent as spotting and fixing defects early in the development process. The second goal is to share ideas among team members so that everyone can learn from one another; sharing ideas also helps maintain consistency in the design and implementation. Finally, code reviews help build stakeholders' confidence in the technical quality of the execution. Overall, code review creates more confidence and reliability in the product, since as a team it is easier to catch bugs.
Overall, I chose this article because I strongly believe a good code review is difficult to conduct, and it takes practice to be efficient. The article emphasizes the importance of code review in software engineering, but it still takes time to develop the skills to be effective at it. That is why I chose this article this week: to emphasize the importance of code reviews and to learn about their main intents and advantages in software engineering.
This week we turn to the adapter design pattern, one of the most useful of the GoF patterns. This article is a good summation of its intent and purpose in software engineering. As stated in the blog post, the adapter design pattern, also known as the Wrapper pattern, helps to bridge the gap between two classes. Like the decorator, it is a structural design pattern. Its main intent is to connect two classes with different interfaces, allowing them to work together without modifying the internal code on either side.
A good analogy provided in the article is that of electrical adapters in different countries. For example, the US has rectangular sockets while India has cylindrical ones. The main point of the socket example is that neither the visiting country's sockets nor the laptop's plug ever changes; the adapter bridges the gap between them. Altogether, the adapter design pattern makes incompatible interfaces work together without changing either interface's properties.
An interesting aspect of the adapter design pattern is that it can be implemented in two different ways. The first uses inheritance and is called the Class Adapter pattern; the second uses composition and is known as the Object Adapter pattern. As we learned with the strategy design pattern, composition provides more flexibility and code reusability and is favored over inheritance. So the Object Adapter pattern is often preferred over the Class Adapter pattern.
The adapter pattern also lets responsibilities be added to individual objects independently, without affecting other objects, and withdrawn just as independently. Finally, the adapter pattern can be used when extension by subclassing is impractical.
The main reason for introducing the adapter design pattern this week is to learn about its usefulness in software development so I can incorporate it into future designs. Like all of the other design patterns encountered so far in the course, the adapter pattern allows for code reuse. It acts as a wrapper between the connected interfaces, letting otherwise incompatible interfaces work together and providing loose coupling between the two. It can also be used for conversion classes. A good example from the article: suppose calculations are done in miles, but the library in use only accepts kilometers. The adapter class can take miles from the client, convert them to kilometers, leverage the external library's methods for all of the calculations, convert the results back to miles, and return them to the client. Finally, to close this week's blog post, my favorite use of the adapter design pattern is with third-party libraries. Because the pattern supports code reuse and the easy addition and removal of responsibilities, it gives you the flexibility and control to replace a third-party library with a better-performing one. This is why I chose this topic this week: to learn the advantages of the adapter design pattern and incorporate it into my own coding style.
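The miles-to-kilometers conversion described above can be sketched as an object adapter in Java. The article only describes the idea, so all class and method names below are hypothetical.

```java
// Hypothetical third-party library that only works in kilometers.
class MetricDistanceLibrary {
    double hoursToTravel(double kilometers, double speedKmh) {
        return kilometers / speedKmh;
    }
}

// Target interface the client codes against, expressed in miles.
interface ImperialTravelCalculator {
    double hoursToTravel(double miles, double speedMph);
}

// Object adapter: wraps the library and converts units at the boundary.
class MilesAdapter implements ImperialTravelCalculator {
    private static final double KM_PER_MILE = 1.609344;
    private final MetricDistanceLibrary library = new MetricDistanceLibrary();

    public double hoursToTravel(double miles, double speedMph) {
        // Both arguments are converted, so the units cancel in the ratio
        // and the resulting time needs no conversion back.
        return library.hoursToTravel(miles * KM_PER_MILE, speedMph * KM_PER_MILE);
    }
}
```

Neither the client interface nor the library changes; swapping in a different metric library later would only mean writing a new adapter.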
Design patterns are important for building software, and they are equally important to know for core Java interviews, so it is always good to have a clear understanding of the various design patterns in Java. The decorator design pattern is a prominent core Java design pattern. It is also used in the JDK's IO packages, which provide decorated Reader and Writer classes such as BufferedReader and BufferedWriter. This blog post examines the decorator design pattern.
The decorator design pattern is used to extend or modify the behavior of an instance at runtime. Whereas inheritance extends the behavior of a whole class, the decorator design pattern lets you take any single object of a class and modify its behavior, leaving the other instances unmodified.
The article describes the decorator pattern as being implemented by constructing a wrapper around an object, which extends its behavior. You start with an interface; this becomes the blueprint both for the class that provides the basic functionality and for the decorators. The pattern then uses an abstract decorator base class that holds, by aggregation, an attribute of the interface type; the class's constructor assigns an interface instance to that attribute. This base class can be extended into as many concrete decorator classes as needed, each with its own methods. Finally, the article illustrates the pattern as an ice cream with decorative toppings: you have the basic ice cream and as many toppings as you like. Ice cream is the interface, simple ice cream and the ice cream decorator are classes, and the decorator subclasses are as many toppings as the developer would like.
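The ice cream example can be sketched in Java as follows. The general shape follows the article's description; the particular topping names and prices are my own illustration.

```java
// Component interface: the blueprint for both the base class and decorators.
interface IceCream {
    String describe();
    double cost();
}

// Basic functionality with no toppings.
class SimpleIceCream implements IceCream {
    public String describe() { return "ice cream"; }
    public double cost() { return 2.00; }
}

// Decorator base class: holds the wrapped IceCream by aggregation.
abstract class IceCreamDecorator implements IceCream {
    protected final IceCream inner;
    protected IceCreamDecorator(IceCream inner) { this.inner = inner; }
}

// Concrete decorators: each topping adds behavior to whatever it wraps.
class ChocolateTopping extends IceCreamDecorator {
    ChocolateTopping(IceCream inner) { super(inner); }
    public String describe() { return inner.describe() + " + chocolate"; }
    public double cost() { return inner.cost() + 0.50; }
}

class SprinklesTopping extends IceCreamDecorator {
    SprinklesTopping(IceCream inner) { super(inner); }
    public String describe() { return inner.describe() + " + sprinkles"; }
    public double cost() { return inner.cost() + 0.25; }
}
```

Toppings stack at runtime by wrapping, e.g. new SprinklesTopping(new ChocolateTopping(new SimpleIceCream())), and only that one object is decorated; other SimpleIceCream instances are untouched.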
I chose this post because it is always important to learn more design patterns; they help organize code and make it more efficient. The advantage of the decorator design pattern is that it is more flexible than inheritance: it provides a more flexible alternative to subclassing for extending functionality. The reason is that inheritance adds responsibilities at compile time, whereas the decorator lets behavior be modified at runtime, without going back into existing code and changing it. In addition, the decorator pattern addresses the combinatorial explosion of subclasses, since components can be wrapped by any number of decorators in any combination. Altogether, the decorator design pattern adheres to the principle I have seen in all of the other design patterns so far, and my favorite principle in software engineering: being open for extension but closed for modification. So I chose this blog post to learn more design patterns, to extend my knowledge, and to incorporate them into my own programs for better organization and efficiency. The decorator is a prominent core Java design pattern that comes up often in core Java interviews, so it is an important one to learn.
Code review has always been my least favorite part of software engineering, and becoming a good reviewer doesn't come easy. So this week we discuss 7 practices that can help improve code review skills. The first is to prioritize the goals of code reviews with the team: all members should know the primary goals of the project, so the team should set up a time to meet and discuss them. The writer's favorite parts of code reviews are the following:
To learn to negotiate with co-workers, to think like them, and to adapt to similar coding conventions, so that the writer can easily navigate and change the code.
To spread knowledge about files and features and what has changed recently, so that if there is a bug in the code, two people can help diagnose and fix the problem. No one should be the only person with a view of their own code.
The writer's least favorite parts of code review are the following:
Catching bugs. Automated tests and actually running the app are better ways to see how code really behaves.
Enforcing basic style rules.
My favorite part of code reviews is the sharing of knowledge and code with team members; code sharing gives a broader view that helps improve coding skills. My least favorite part is catching and debugging bugs.
The second recommendation for improving code review skills is to run the app and try playing with the features. Reading code is not the same as interacting with it, and it is unlikely that a programmer will catch most bugs just from reading; finding bugs by reading takes years of practice. It is better to run tests on the code and interact with it than to simulate it in your head. Chances are you will catch important cases that would otherwise have been missed.
The third method is visualizing method call hierarchies: drawing out which methods call which other methods, or which objects use which other objects. The key is to quiz yourself; plain reading is not as effective as writing things down and committing them to memory.
The fourth way to improve code review skills is to start the review as soon as you see the request. Even for large reviews, reviewers should try to make a first pass as soon as possible. However, that is not always easy; there may be barriers, such as the code having changed many times. Below are some tips to help speed the process along:
Set a time limit of about half an hour and spend that time mapping out the changes and writing down questions. If you are not ready to sign off, schedule and commit to a time when you can make a more detailed pass and approve or request changes.
The final tip, which I thought was very helpful and important, is to keep two separate repositories on your machine: one for your own changes and one for the changes you are reviewing. This keeps your changes in place, so compiling a co-worker's changes won't clobber your own work.
I chose this blog post to compare it with my own experiences with software development and code reviews. I agree with the final tip about keeping two separate repositories, as it makes it easier to recover files that might have been disrupted by co-workers' changes or your own.
This week we turn to best testing practices in JUnit. This article lists some of the most effective JUnit testing practices, which I hope to incorporate into my own work; that is the main reason I chose this blog post this week. Testing has two constraints. First, there are a lot of unit tests to run, so testing needs to be lightning fast. Second, tests are used to indicate problems in the production code, so a test should fail if and only if the production code is broken; testing needs to be extremely reliable.
The article argues that unit tests need to run completely in memory: a unit test should not make HTTP requests, access a database, or read from the filesystem. These kinds of tests take too much time or are too unreliable for unit testing and should be left to other kinds of tests, such as functional tests. Filesystem access in particular is too complicated for unit tests. The article lists the following complications:
Tests that touch the filesystem depend on the current working directory, which can differ between developer machines and build machines. They also often require the files being read to be stored in source control, and it can be difficult to keep those files up to date.
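One common way to keep such tests in memory, sketched here with my own class names (the article does not give code), is to write production code against Readable rather than a file path, so a test can substitute an in-memory StringReader for a real file:

```java
import java.io.StringReader;
import java.util.Scanner;

// Hypothetical production class: it accepts any Readable, so tests can pass
// an in-memory StringReader instead of opening a file on disk.
class LineCounter {
    static int countLines(Readable source) {
        int lines = 0;
        try (Scanner scanner = new Scanner(source)) {
            while (scanner.hasNextLine()) {
                scanner.nextLine();
                lines++;
            }
        }
        return lines;
    }
}
```

A unit test can then call LineCounter.countLines(new StringReader("a\nb\nc")) and assert the result is 3, with no working-directory or source-control concerns at all.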
My favorite recommendation of all is not to use static members in a test class. Static members create hidden dependencies in unit testing. For example, if a class depends on the DatabaseUtils.createConnection() method, then that method, and whatever depends on it, is almost impossible to test: you would need a real database, or a testing flag inside DatabaseUtils. Another issue is that a static method's behavior applies to every caller. To alter its behavior, a flag must either be passed as a parameter to the method or be set as static state. The problem with passing flags as parameters is that it changes the signature for every caller, which becomes cumbersome as more and more flags are added; the problem with static flags is that the configuration ends up scattered all over the code. Therefore, the use of static members in test classes is impractical. Although I have not yet encountered static members in test cases myself, this was the most interesting practice to discuss, and that is why it is my favorite.
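A common refactoring for the DatabaseUtils.createConnection() situation, sketched here with hypothetical names (the article names only the static method), is to hide the static call behind a small interface and inject it, so a test can pass a stub instead of a real database:

```java
import java.sql.Connection;

// Instead of calling a static DatabaseUtils.createConnection() directly,
// the class depends on this narrow, injectable interface.
interface ConnectionProvider {
    Connection createConnection();
}

class UserRepository {
    private final ConnectionProvider provider;

    UserRepository(ConnectionProvider provider) {
        this.provider = provider;
    }

    // Every data-access method goes through the injected provider, so a
    // unit test can supply a fake without any real database or global flag.
    boolean canConnect() {
        try (Connection connection = provider.createConnection()) {
            return connection != null;
        } catch (Exception e) {
            return false;
        }
    }
}
```

In production, the provider can still delegate to the existing static method; in a unit test it can simply be a lambda returning a stub connection.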
The last recommendation discussed in the article is not to skip unit tests. The mechanisms for skipping tests, all of which should be avoided, are JUnit's @Ignore annotation, Maven's maven.test.skip property, the Maven Surefire Plugin's skipTests property, and the Maven Surefire Plugin's excludes property. Skipped unit tests provide no benefit yet still have to be checked out of source control and compiled. So instead of skipping unit tests, remove them from source control.
Overall, I chose this blog post to extend my experience with JUnit testing beyond the lecture slides, which I found inadequate, and to gain more hands-on exposure to JUnit.