Welcome to the Hacktoberfest 2023 Data Structures and Algorithms project! This repository is open for contributions as part of the Hacktoberfest celebration. If you're passionate about algorithms, data structures, and programming in C++ or Python, this is the right place for you.
- Fork this repository.
- Clone your forked repository to your local machine:
https://github.com/Einsteinia11/Data-Structures-and-Algorithm-with-Design-and-Analysis
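A minimal sketch of the clone step, assuming you have Git installed (`<your-username>` is a placeholder for your own GitHub handle, since you clone your fork rather than this repository directly):

```bash
# Clone your fork (replace <your-username> with your GitHub handle)
git clone https://github.com/<your-username>/Data-Structures-and-Algorithm-with-Design-and-Analysis.git
cd Data-Structures-and-Algorithm-with-Design-and-Analysis
```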
As you can guess from the title, this repository contains data structures and algorithms along with their design and analysis. So if you can contribute❤ it would be great!🤩
Time complexity is just a mathematical way of analysing how long an algorithm with a given number of inputs (n) will take to complete its task. It's usually expressed using Big O notation, which reduces an algorithm's overall step count to its dominant term by dropping the lower-order terms, constants, and coefficients that don't have a big impact on the overall complexity of the problem. Below are some common time complexities with simple definitions, and a short code sketch follows the list. Feel free to check out Wikipedia, though, for more in-depth definitions.
- O(1) — Constant Time: Given an input of size n, it only takes a single step for the algorithm to accomplish the task.
- O(log n) — Logarithmic Time: Given an input of size n, the number of steps it takes to accomplish the task is decreased by some factor with each step.
- O(n) — Linear Time: Given an input of size n, the number of steps required is directly proportional to n (1 to 1).
- O(n²) — Quadratic Time: Given an input of size n, the number of steps it takes to accomplish a task is the square of n.
- O(C^n) — Exponential Time: Given an input of size n, the number of steps it takes to accomplish a task is a constant raised to the power n (a pretty large number).
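To make these growth rates concrete, here is a minimal Python sketch of one example of each (the function names are illustrative only, not part of this repository):

```python
def constant_time(items):
    """O(1): a single step, regardless of input size."""
    return items[0]


def logarithmic_time(sorted_items, target):
    """O(log n): binary search halves the remaining range on each step."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1


def linear_time(items, target):
    """O(n): in the worst case, every element is inspected once."""
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1


def quadratic_time(items):
    """O(n²): nested loops touch every pair of elements."""
    return [(a, b) for a in items for b in items]


def exponential_time(n):
    """O(C^n): naive recursive Fibonacci roughly doubles the work per level."""
    if n < 2:
        return n
    return exponential_time(n - 1) + exponential_time(n - 2)
```

For example, `logarithmic_time(list(range(1000)), 42)` needs at most about 10 comparisons, while `linear_time` on the same list may need up to 1000.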
Big O notation is the language we use for talking about how long an algorithm takes to run. It's how we compare the efficiency of different approaches to a problem.
It's like math except it's an awesome, not-boring kind of math where you get to wave your hands through the details and just focus on what's basically happening.
With big O notation we express the runtime in terms of—brace yourself—how quickly it grows relative to the input, as the input gets arbitrarily large.
Let's break that down:
- how quickly the runtime grows—It's hard to pin down the exact runtime of an algorithm. It depends on the speed of the processor, what else the computer is running, etc. So instead of talking about the runtime directly, we use big O notation to talk about how quickly the runtime grows.
- relative to the input—If we were measuring our runtime directly, we could express our speed in seconds. Since we're measuring how quickly our runtime grows, we need to express our speed in terms of...something else. With big O notation, we use the size of the input, which we call "n". So we can say things like the runtime grows "on the order of the size of the input" (O(n)) or "on the order of the square of the size of the input" (O(n²)).
- as the input gets arbitrarily large—Our algorithm may have steps that seem expensive when n is small but are eclipsed eventually by other steps as n gets huge. For big O analysis, we care most about the stuff that grows fastest as the input grows, because everything else is quickly eclipsed as n gets very large. (If you know what an asymptote is, you might see why "big O analysis" is sometimes called "asymptotic analysis.")
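To see that eclipsing happen, here's a tiny sketch with made-up numbers: suppose an algorithm takes 100n "setup" steps plus n² "comparison" steps. The n² term dominates once n gets large, which is why big O keeps only the fastest-growing term:

```python
def step_count(n):
    """Hypothetical cost model: 100n setup steps plus n**2 comparison steps."""
    return 100 * n + n ** 2

for n in (10, 100, 10_000):
    total = step_count(n)
    print(f"n={n:>6}: total={total:>12,} steps, n^2 share={(n ** 2) / total:.1%}")
```

At n = 10 the quadratic term is under 10% of the work, but at n = 10,000 it's about 99%, so we'd call this algorithm O(n²).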
And here are some helpful resources to learn more:
- Wikipedia's article on Big O notation
- The Big O Cheat Sheet, a great resource with common algorithmic time complexities and a graphical representation. Check it out!