diff --git a/solutions/system_design/pastebin/README.md b/solutions/system_design/pastebin/README.md
index 7f95ab9..3cc242c 100644
--- a/solutions/system_design/pastebin/README.md
+++ b/solutions/system_design/pastebin/README.md
@@ -95,7 +95,7 @@ An alternative to a relational database acting as a large hash table, we could u
 
 * The **Client** sends a create paste request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
 * The **Web Server** forwards the request to the **Write API** server
-* The **Write API** server does does the following:
+* The **Write API** server does the following:
     * Generates a unique url
         * Checks if the url is unique by looking at the **SQL Database** for a duplicate
         * If the url is not unique, it generates another url
diff --git a/solutions/system_design/social_graph/README.md b/solutions/system_design/social_graph/README.md
index fac4b3f..947956f 100644
--- a/solutions/system_design/social_graph/README.md
+++ b/solutions/system_design/social_graph/README.md
@@ -104,7 +104,7 @@ We won't be able to fit all users on the same machine, we'll need to [shard](htt
 * The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
 * The **Web Server** forwards the request to the **Search API** server
 * The **Search API** server forwards the request to the **User Graph Service**
-* The **User Graph Service** does does the following:
+* The **User Graph Service** does the following:
     * Uses the **Lookup Service** to find the **Person Server** where the current user's info is stored
     * Finds the appropriate **Person Server** to retrieve the current user's list of `friend_ids`
     * Runs a BFS search using the current user as the `source` and the current user's `friend_ids` as the ids for each `adjacent_node`
diff --git a/solutions/system_design/web_crawler/README.md b/solutions/system_design/web_crawler/README.md
index b84f07e..7876b94 100644
--- a/solutions/system_design/web_crawler/README.md
+++ b/solutions/system_design/web_crawler/README.md
@@ -213,7 +213,7 @@ We might also choose to support a `Robots.txt` file that gives webmasters contro
 
 * The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
 * The **Web Server** forwards the request to the **Query API** server
-* The **Query API** server does does the following:
+* The **Query API** server does the following:
     * Parses the query
         * Removes markup
         * Breaks up the text into terms