Merge branch 'master' into feature/lru-cache
amejiarosario committed Mar 30, 2020
2 parents 3e787c6 + 93543c4 commit e8ca8b7
Showing 20 changed files with 12,340 additions and 3,506 deletions.
4 changes: 4 additions & 0 deletions .circleci/config.yml
@@ -46,6 +46,10 @@ jobs: # a collection of steps
- store_test_results: # for display in Test Summary: https://circleci.com/docs/2.0/collect-test-data/
path: test-results.xml

- run:
name: release
command: npm run semantic-release || true

docs:
docker:
- image: circleci/ruby:2.5.3-stretch-node
2 changes: 1 addition & 1 deletion .node-version
@@ -1 +1 @@
10.12.0
12.16.1
10 changes: 2 additions & 8 deletions CONTRIBUTING.md
@@ -137,15 +137,9 @@ If the commit reverts a previous commit, it should begin with `revert: `, follow
### Type
Must be one of the following:
* **feat**: A new feature
* **fix**: A bug fix
* **docs**: Documentation only changes
* **build**: Changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)
* **ci**: Changes to our CI configuration files and scripts (example scopes: Circle, BrowserStack, SauceLabs)
* **test**: Adding missing tests or correcting existing tests
* **refactor**: A code change that neither fixes a bug nor adds a feature
* **style**: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
* **perf**: A code change that improves performance
* **feat**: A new feature
* **chore**: Changes to our CI configuration files and scripts (example scopes: Circle, BrowserStack, SauceLabs)
### Scope
The scope should be the name of the npm package affected (as perceived by the person reading the changelog generated from commit messages).
2 changes: 1 addition & 1 deletion book/content/dedication.asc
@@ -1,4 +1,4 @@
[dedication]
== Dedication

_To my wife Nathalie that supported me in my long hours of writing and my baby girl Abigail._
_To my wife Nathalie who supported me in my long hours of writing and my baby girl Abigail._
20 changes: 10 additions & 10 deletions book/content/part01/algorithms-analysis.asc
@@ -5,7 +5,7 @@ endif::[]

=== Fundamentals of Algorithms Analysis

Probably you are reading this book because you want to write better and faster code.
You are probably reading this book because you want to write better and faster code.
How can you do that? Can you time how long it takes to run a program? Of course, you can!
[big]#⏱#
However, if you run the same program on a smartwatch, cellphone or desktop computer, it will take different times.
@@ -15,7 +15,7 @@ image::image3.png[image,width=528,height=137]
Wouldn't it be great if we could compare algorithms regardless of the hardware where we run them?
That's what *time complexity* is for!
But, why stop with the running time?
We could also compare the memory "used" by different algorithms, and we called that *space complexity*.
We could also compare the memory "used" by different algorithms, and we call that *space complexity*.

.In this chapter you will learn:
- What’s the best way to measure the performance of your code regardless of what hardware you use.
@@ -59,16 +59,16 @@ To give you a clearer picture of how different algorithms perform as the input s
|=============================================================================================
|Input size -> |10 |100 |10k |100k |1M
|Finding if a number is odd |< 1 sec. |< 1 sec. |< 1 sec. |< 1 sec. |< 1 sec.
|Sorting elements in array with merge sort |< 1 sec. |< 1 sec. |< 1 sec. |few sec. |20 sec.
|Sorting elements in array with Bubble Sort |< 1 sec. |< 1 sec. |2 minutes |3 hours |12 days
|Finding all subsets of a given set |< 1 sec. |40,170 trillion years |> centillion years |∞ |∞
|Find all permutations of a string |4 sec. |> vigintillion years |> centillion years |∞ |∞
|Sorting array with merge sort |< 1 sec. |< 1 sec. |< 1 sec. |few sec. |20 sec.
|Sorting array with Selection Sort |< 1 sec. |< 1 sec. |2 minutes |3 hours |12 days
|Finding all subsets |< 1 sec. |40,170 trillion years |> centillion years |∞ |∞
|Finding string permutations |4 sec. |> vigintillion years |> centillion years |∞ |∞
|=============================================================================================

Most algorithms are affected by the size of the input (`n`). Let's say you need to arrange numbers in ascending order. Sorting ten items will naturally take less time than sorting 2 million. But how much longer? As the input size grows, some algorithms take proportionally more time; we classify them as <<part01-algorithms-analysis#linear, linear>> runtime [or `O(n)`]. Others might take quadratically longer; we call that <<part01-algorithms-analysis#quadratic, quadratic>> running time [or `O(n^2^)`].

From another perspective, if you keep the input size the same and run different algorithm implementations, you would notice the difference between an efficient algorithm and a slow one. For example, a good sorting algorithm is <<part04-algorithmic-toolbox#merge-sort>>, and an inefficient algorithm for large inputs is <<part04-algorithmic-toolbox#selection-sort>>.
Organizing 1 million elements with merge sort takes 20 seconds while bubble sort takes 12 days, ouch!
Organizing 1 million elements with merge sort takes 20 seconds while selection sort takes 12 days, ouch!
The amazing thing is that both programs solve the same problem with the same data and hardware, and yet there's a big difference in time!
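
Where do "20 seconds" and "12 days" come from? Here's a rough back-of-the-envelope sketch, assuming (hypothetically) a machine that executes about a million basic operations per second:

.Estimating O(n log n) vs. O(n^2^) at n = 1M (illustrative sketch)
[source, javascript]
----
const n = 1e6;
const opsPerSecond = 1e6; // assumed machine speed: ~1 million basic operations/second

const mergeSortOps = n * Math.log2(n); // ≈ 2.0e7 operations for O(n log n)
const selectionSortOps = n * n; // 1.0e12 operations for O(n^2)

console.log(`merge sort: ~${Math.round(mergeSortOps / opsPerSecond)} seconds`); // ~20 seconds
console.log(`selection sort: ~${Math.round(selectionSortOps / opsPerSecond / 86400)} days`); // ~12 days
----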

After completing this book, you are going to _think algorithmically_.
@@ -135,7 +135,7 @@ There’s a notation called *Big O*, where `O` refers to the *order of the functi

TIP: Big O = Big Order of a function.

If you have a program which runtime is:
If you have a program that has a runtime of:

_7n^3^ + 3n^2^ + 5_

@@ -144,7 +144,7 @@ You can express it in Big O notation as _O(n^3^)_. The other terms (_3n^2^ + 5_)
Big O notation only cares about the “biggest” terms in the time/space complexity. So, it combines what we learned about time and space complexity and asymptotic analysis, and adds a worst-case scenario.

.All algorithms have three scenarios:
* Best-case scenario: the most favorable input arrange where the program will take the least amount of operations to complete. E.g., array already sorted is beneficial for some sorting algorithms.
* Best-case scenario: the most favorable input arrangement where the program will take the least amount of operations to complete. E.g., an array that's already sorted is beneficial for some sorting algorithms.
* Average-case scenario: this is the most common case. E.g., array items in random order for a sorting algorithm.
* Worst-case scenario: the inputs are arranged in a way that causes the program to take the longest to complete. E.g., array items in reversed order will take the longest to run for some sorting algorithms.
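
To see why it's safe to drop _3n^2^ + 5_, here's a quick sketch (assuming the runtime function _7n^3^ + 3n^2^ + 5_ from above) that shows how fast the highest-order term dominates:

.The highest-order term dominates (illustrative sketch)
[source, javascript]
----
const runtime = (n) => 7 * n ** 3 + 3 * n ** 2 + 5;

[10, 100, 1000].forEach((n) => {
  const share = (7 * n ** 3) / runtime(n);
  console.log(`n=${n}: the 7n^3 term is ${(share * 100).toFixed(2)}% of the total`);
});
// n=10: 95.82%, n=100: 99.57%, n=1000: 99.96%
----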

@@ -154,7 +154,7 @@ TIP: Big O only cares about the highest order of the run time function and the w

WARNING: Don't drop terms that are multiplying other terms. _O(n log n)_ is not equivalent to _O(n)_. However, _O(n + log n)_ is.

There are many common notations like polynomial, _O(n^2^)_ like we saw in the `getMin` example; constant _O(1)_ and many more that we are going to explore in the next chapter.
There are many common notations like polynomial, _O(n^2^)_ as we saw in the `getMin` example; constant _O(1)_ and many more that we are going to explore in the next chapter.

Again, time complexity is not a direct measure of how long a program takes to execute, but rather how many operations it performs given the input size. Nevertheless, there’s a relationship between time complexity and clock time as we can see in the following table.
(((Tables, Intro, Input size vs clock time by Big O)))
22 changes: 11 additions & 11 deletions book/content/part01/big-o-examples.asc
@@ -7,7 +7,7 @@ endif::[]

There are many kinds of algorithms. Most of them fall into one of the eight time complexities that we are going to explore in this chapter.

.Eight Running Time complexity You Should Know
.Eight Running Time Complexities You Should Know
- Constant time: _O(1)_
- Logarithmic time: _O(log n)_
- Linear time: _O(n)_
@@ -17,7 +17,7 @@ There are many kinds of algorithms. Most of them fall into one of the eight time
- Exponential time: _O(2^n^)_
- Factorial time: _O(n!)_

We a going to provide examples for each one of them.
We are going to provide examples for each one of them.

Before we dive in, here’s a plot with all of them.

@@ -30,7 +30,7 @@ The above chart shows how the running time of an algorithm is related to the amo
==== Constant
(((Constant)))
(((Runtime, Constant)))
Represented as *O(1)*, it means that regardless of the input size the number of operations executed is always the same. Let’s see an example.
Represented as *O(1)*, it means that regardless of the input size, the number of operations executed is always the same. Let’s see an example:

[#constant-example]
===== Finding if an array is empty
@@ -47,7 +47,7 @@ include::{codedir}/runtimes/01-is-empty.js[tag=isEmpty]

Another, more real-life example is adding an element to the beginning of a <<part02-linear-data-structures#linked-list>>. You can check out the implementation <<part02-linear-data-structures#linked-list-inserting-beginning, here>>.

As you can see, in both examples (array and linked list) if the input is a collection of 10 elements or 10M it would take the same amount of time to execute. You can't get any more performant than this!
As you can see in both examples (array and linked list), if the input is a collection of 10 elements or 10M, it would take the same amount of time to execute. You can't get any more performant than this!
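
The `01-is-empty.js` listing is referenced above but collapsed; a minimal sketch of such a constant-time check (the book's actual code may differ) looks like this:

.Checking if an array is empty in O(1) (sketch)
[source, javascript]
----
function isEmpty(array = []) {
  // Reading `length` is a single operation, no matter how big the array is.
  return array.length === 0;
}

isEmpty([]); // true
isEmpty(Array(1e6).fill(1)); // false — same amount of work as for the empty array
----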

[[logarithmic]]
==== Logarithmic
@@ -68,7 +68,7 @@ The binary search only works for sorted lists. It starts searching for an elemen
include::{codedir}/runtimes/02-binary-search.js[tag=binarySearchRecursive]
----

This binary search implementation is a recursive algorithm, which means that the function `binarySearch` calls itself multiple times until the solution is found. The binary search splits the array in half every time.
This binary search implementation is a recursive algorithm, which means that the function `binarySearchRecursive` calls itself multiple times until the solution is found. The binary search splits the array in half every time.

Finding the runtime of recursive algorithms is sometimes not obvious. It requires tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call, you are most likely looking at a logarithmic runtime: _O(log n)_.
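
Since the `02-binary-search.js` listing is collapsed here, below is a minimal index-based sketch of the idea (the book's implementation may differ in details):

.Recursive binary search (sketch)
[source, javascript]
----
function binarySearchRecursive(array, search, low = 0, high = array.length - 1) {
  if (low > high) return -1; // base case: not found
  const mid = Math.floor((low + high) / 2);
  if (array[mid] === search) return mid; // found it!
  return search > array[mid]
    ? binarySearchRecursive(array, search, mid + 1, high) // discard the left half
    : binarySearchRecursive(array, search, low, mid - 1); // discard the right half
}

binarySearchRecursive([1, 3, 5, 7, 9, 11], 9); // 4
----

Each call discards half of the remaining range, so doubling `n` only adds one extra call: that's the signature of _O(log n)_.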

@@ -92,8 +92,8 @@ include::{codedir}/runtimes/03-has-duplicates.js[tag=hasDuplicates]

.`hasDuplicates` has multiple scenarios:
* *Best-case scenario*: first two elements are duplicates. It only has to visit two elements.
* *Worst-case scenario*: no duplicated or duplicated are the last two. In either case, it has to visit every item on the array.
* *Average-case scenario*: duplicates are somewhere in the middle of the collection. Only, half of the array will be visited.
* *Worst-case scenario*: no duplicates or duplicates are the last two. In either case, it has to visit every item in the array.
* *Average-case scenario*: duplicates are somewhere in the middle of the collection. Only half of the array will be visited.

As we learned before, Big O cares about the worst-case scenario, where we would have to visit every element in the array. So, we have an *O(n)* runtime.
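
The collapsed `03-has-duplicates.js` listing can be sketched as a single pass using a `Set` (an assumption; the book may use a `Map` instead):

.Linear hasDuplicates (sketch)
[source, javascript]
----
function hasDuplicates(array) {
  const seen = new Set(); // Set lookups/inserts are O(1) on average
  for (const item of array) {
    if (seen.has(item)) return true; // best case: duplicates found early
    seen.add(item);
  }
  return false; // worst case: visited every item in the array
}

hasDuplicates([5, 5, 1, 9]); // true (visits only the first two items)
hasDuplicates([2, 5, 1, 9]); // false (visits all n items => O(n))
----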

@@ -147,19 +147,19 @@ Usually they have double-nested loops, where each one visits all or most element
[[quadratic-example]]
===== Finding duplicates in an array (naïve approach)

If you remember we have solved this problem more efficiently on the <<part01-algorithms-analysis#linear, Linear>> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_:
If you remember, we have solved this problem more efficiently in the <<part01-algorithms-analysis#linear, Linear>> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_:

// image:image12.png[image,width=527,height=389]

.Naïve implementation of has duplicates function
.Naïve implementation of hasDuplicates function
[source, javascript]
----
include::{codedir}/runtimes/05-has-duplicates-naive.js[tag=hasDuplicates]
----

As you can see, we have two nested loops, causing the running time to be quadratic. How much difference is there between a linear and a quadratic algorithm?

Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. If you use this quadratic solution you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <<part01-algorithms-analysis#linear, linear solution>> you will get the answer in seconds! [big]#🚀#
Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. If you use this quadratic solution, you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <<part01-algorithms-analysis#linear, linear solution>>, you will get the answer in seconds! [big]#🚀#
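
For contrast with the linear version, here's a sketch of the naïve quadratic approach (the collapsed `05-has-duplicates-naive.js` listing may differ), comparing every pair with two nested loops:

.Quadratic hasDuplicates (sketch)
[source, javascript]
----
function hasDuplicatesNaive(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      if (array[i] === array[j]) return true; // compare item i against every later item
    }
  }
  return false; // ~n(n-1)/2 comparisons in the worst case => O(n^2)
}
----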

[[cubic]]
==== Cubic
@@ -186,7 +186,7 @@ include::{codedir}/runtimes/06-multi-variable-equation-solver.js[tag=findXYZ]

WARNING: This is just an example; there are better ways to solve multi-variable equations.

As you can see three nested loops usually translates to O(n^3^). If you have a four variable equation and four nested loops it would be O(n^4^) and so on when we have a runtime in the form of _O(n^c^)_, where _c > 1_, we refer to this as a *polynomial runtime*.
As you can see, three nested loops usually translate to O(n^3^). If you have a four-variable equation and four nested loops, it would be O(n^4^), and so on. When we have a runtime in the form of _O(n^c^)_, where _c > 1_, we refer to this as a *polynomial runtime*.
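
A minimal sketch of such a brute-force solver (the specific equation, ranges, and return shape are assumptions; the collapsed `findXYZ` listing may differ):

.Brute-force 3-variable equation solver (sketch)
[source, javascript]
----
// Try every combination of x, y, z in [0, n) for, say, 3x + 9y + 8z = target.
function findXYZ(n, target) {
  const solutions = [];
  for (let x = 0; x < n; x++) {
    for (let y = 0; y < n; y++) {
      for (let z = 0; z < n; z++) {
        if (3 * x + 9 * y + 8 * z === target) solutions.push({ x, y, z });
      }
    }
  }
  return solutions; // three nested loops of n iterations each => O(n^3)
}

findXYZ(10, 79); // [{ x: 0, y: 7, z: 2 }, ...]
----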

[[exponential]]
==== Exponential
2 changes: 1 addition & 1 deletion book/content/part02/array-vs-list-vs-queue-vs-stack.asc
@@ -17,7 +17,7 @@ In this part of the book, we explored the most used linear data structures such
* You want constant time to remove/add from the extremes of the list.

.Use a Queue when:
* You need to access your data in a first-come, first served basis (FIFO).
* You need to access your data on a first-come, first served basis (FIFO).
* You need to implement a <<part03-graph-data-structures#bfs-tree, Breadth-First Search>>

.Use a Stack when:
18 changes: 9 additions & 9 deletions book/content/part02/array.asc
@@ -17,7 +17,7 @@ TIP: Strings are a collection of Unicode characters and most of the array concep

.Fixed vs. Dynamic Size Arrays
****
Some programming languages have fixed size arrays like Java and C++. Fixed size arrays might be a hassle when your collection gets full, and you have to create a new one with a bigger size. For that, those programming languages also have built-in dynamic arrays: we have `vector` in C++ and `ArrayList` in Java. Dynamic programming languages like JavaScript, Ruby, Python use dynamic arrays by default.
Some programming languages have fixed size arrays like Java and C++. Fixed size arrays might be a hassle when your collection gets full, and you have to create a new one with a bigger size. For that, those programming languages also have built-in dynamic arrays: we have `vector` in C++ and `ArrayList` in Java. Dynamic programming languages like JavaScript, Ruby, and Python use dynamic arrays by default.
****

Arrays look like this:
@@ -29,7 +29,7 @@ Arrays are a sequential collection of elements that can be accessed randomly usi

==== Insertion

Arrays are built-in into most languages. Inserting an element is simple; you can either add them on creation time or after initialization. Below you can find an example for both cases:
Arrays are built-in into most languages. Inserting an element is simple; you can either add them at creation time or after initialization. Below you can find an example for both cases:

.Inserting elements into an array
[source, javascript]
@@ -44,7 +44,7 @@ array2[100] = 2;
array2 // [empty × 3, 1, empty × 96, 2]
----

Using the index, you can replace whatever value you want. Also, you don't have to add items next to each other. The size of the array will dynamically expand to accommodate the data. You can reference values in whatever index you like index 3 or even 100! In the `array2` we inserted 2 numbers, but the length is 101, and there are 99 empty spaces.
Using the index, you can replace whatever value you want. Also, you don't have to add items next to each other. The size of the array will dynamically expand to accommodate the data. You can reference values at whatever index you like: index 3 or even 100! In `array2`, we inserted 2 numbers but the length is 101 and there are 99 empty spaces.

[source, javascript]
----
@@ -87,7 +87,7 @@ const array = [2, 5, 1, 9, 6, 7];
array.splice(1, 0, 111); // ↪️ [] <1>
// array: [2, 111, 5, 1, 9, 6, 7]
----
<1> at the position `1`, delete `0` elements and insert `111`.
<1> at position `1`, delete `0` elements and insert `111`.

The Big O for this operation would be *O(n)* since, in the worst case, it would move most of the elements to the right.

@@ -132,7 +132,7 @@ const array = [2, 5, 1, 9, 6, 7];
array[4]; // ↪️ 6
----

Searching by index takes constant time, *O(1)*, to retrieve values out of the array. If we want to get fancier we can create a function:
Searching by index takes constant time - *O(1)* - to retrieve values out of the array. If we want to get fancier, we can create a function:

// image:image17.png[image,width=528,height=293]
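
The function referenced above is collapsed in this excerpt; a minimal sketch (the name `searchByIndex` and its out-of-bounds behavior are assumptions) could be:

.Index-based search (sketch)
[source, javascript]
----
function searchByIndex(array, index) {
  if (index < 0 || index >= array.length) return undefined; // out of bounds
  return array[index]; // a single indexed read: O(1), regardless of the array's size
}

searchByIndex([2, 5, 1, 9, 6, 7], 4); // 6
searchByIndex([2, 5, 1, 9, 6, 7], 100); // undefined
----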

@@ -184,7 +184,7 @@ We would have to loop through the whole array (worst case) or until we find it:

==== Deletion

Deleting (similar to insertion) there are three possible scenarios, removing at the beginning, middle or end.
There are three possible scenarios for deletion (similar to insertion): removing at the beginning, middle or end.

===== Deleting element from the beginning

@@ -223,7 +223,7 @@ array.splice(2, 1); // ↪️[2] <1>
----
<1> delete 1 element at position 2

Deleting from the middle might cause most the elements of the array to move back one position to fill in for the eliminated item. Thus, runtime: O(n).
Deleting from the middle might cause most of the elements of the array to move up one position to fill in for the eliminated item. Thus, runtime: O(n).

===== Deleting element from the end

@@ -237,7 +237,7 @@ array.pop(); // ↪️111
// array: [2, 5, 1, 9]
----

No element other element has been shifted, so it’s an _O(1)_ runtime.
No other element has been shifted, so it’s an _O(1)_ runtime.

.JavaScript built-in `array.pop`
****
@@ -264,7 +264,7 @@ To sum up, the time complexity of an array is:
(((Runtime, Constant)))
(((Tables, Linear DS, JavaScript Array built-in operations Complexities)))

.Array Operations timex complexity
.Array Operations time complexity
|===
| Operation | Time Complexity | Usage
| push ^| O(1) | Insert element to the right side.