# python quicksort using recursion

By : Maarten van Hees
Date : November 22 2020, 04:01 AM
You do not need to add equal numbers to `less`. Collect them in a separate array and put it in the middle of your return statement. Try this:
code :
``````import random

def quick_sort(arr):
    # If the array is empty or has only 1 element
    # it means the array is already sorted, so return it.
    if len(arr) < 2:
        return arr
    else:
        rand_index = random.randint(0, len(arr) - 1)
        pivot = arr[rand_index]
        less = []
        equal_nums = []
        greater = []

        # Create the less, equal and greater arrays by comparing with the pivot.
        for i in arr:
            if i < pivot:
                less.append(i)
            elif i > pivot:
                greater.append(i)
            else:
                equal_nums.append(i)

        return quick_sort(less) + equal_nums + quick_sort(greater)
``````
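As a quick sanity check (a condensed sketch of the same approach, not from the original answer), the result can be compared against Python's built-in `sorted`:

```python
import random

def quick_sort(arr):
    # Condensed version of the answer above: partition around a random pivot.
    if len(arr) < 2:
        return arr
    pivot = arr[random.randint(0, len(arr) - 1)]
    return (quick_sort([i for i in arr if i < pivot])
            + [i for i in arr if i == pivot]
            + quick_sort([i for i in arr if i > pivot]))

data = [random.randint(-100, 100) for _ in range(500)]
assert quick_sort(data) == sorted(data)
print(quick_sort([3, 1, 2, 3, 0]))  # [0, 1, 2, 3, 3]
```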
A one-liner version needs to compare each element against a pivot (for example the first element), not against the whole list `L`:
``````def qsort(L):
    if L:
        pivot = L[0]
        return (qsort([x for x in L if x < pivot])
                + [x for x in L if x == pivot]
                + qsort([x for x in L if x > pivot]))
    return []
``````

## Quicksort: how do pivot-choosing strategies affect the overall big-O behavior of quicksort?

By : user2735254
Date : March 29 2020, 07:55 AM
An important fact you should know is that in an array of distinct elements, quicksort with a random choice of pivot will run in O(n lg n) on expectation. There are many good proofs of this, and the one on Wikipedia has a pretty good discussion of it. If you're willing to accept a slightly less formal proof that's mostly mathematically sound, the intuition goes as follows.

Whenever we pick a pivot, let's say that a "good" pivot is one that gives us at least a 75%/25% split; that is, it's greater than at least 25% of the elements and at most 75% of the elements. We want to bound the number of times we can get a pivot of this sort before the algorithm terminates. Suppose that we get k splits of this sort and consider the size of the largest subproblem generated this way. It has size at most (3/4)^k · n, since on each iteration we're getting rid of at least a quarter of the elements. If we consider the specific case where k = log_{4/3} n, then the size of the largest subproblem after k good pivots are chosen will be 1, and the recursion will stop. This means that if we get O(lg n) good pivots, the recursion will terminate.

But on each iteration, what's the chance of getting such a pivot? Well, if we pick the pivot randomly, then there's a 50% chance that it's in the middle 50% of the elements, so on expectation we'll choose two random pivots before we get a good one. Each step of choosing a pivot takes O(n) time, and so we should spend roughly O(n) time before getting each good pivot. Since we need at most O(lg n) good pivots, the overall runtime is O(n lg n) on expectation.
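This argument can be checked empirically (a sketch, not part of the original answer): measure the maximum recursion depth of a randomized quicksort and compare it against lg n. The depth should stay within a small constant multiple of lg n.

```python
import math
import random

def qsort_depth(arr, depth=0):
    """Randomized quicksort that also reports the deepest recursion level reached."""
    if len(arr) < 2:
        return arr, depth
    pivot = arr[random.randint(0, len(arr) - 1)]
    left, dl = qsort_depth([x for x in arr if x < pivot], depth + 1)
    right, dr = qsort_depth([x for x in arr if x > pivot], depth + 1)
    return left + [x for x in arr if x == pivot] + right, max(dl, dr)

random.seed(1)
n = 10_000
sorted_arr, depth = qsort_depth(random.sample(range(10 * n), n))
# Expected depth is O(lg n): a small constant multiple of log2(10000) ~= 13.3.
print(depth, round(math.log2(n), 1))
```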
An important detail in the above discussion is that if you replace the 75%/25% split with any constant split - say, a (100 - k)% / k% split - the overall asymptotic analysis is the same. You'll get that quicksort takes, on average, O(n lg n) time.

## Will `quicksort 3way` be slower than `quicksort` in the general case?

By : Somaye Barani
Date : March 29 2020, 07:55 AM
Give it a little thought: you have an algorithm designed to help you solve a worst-case scenario for another algorithm. Of course it is not supposed to beat the initial algorithm in the general case. The idea in 3-way quicksort is to improve the worst-case behavior (inputs with many duplicate keys), not the average case.

## How to know the current level of recursion?

By : jun huang
Date : March 29 2020, 07:55 AM
Keep a counter; its value will show the current depth. Something like this:
code :
``````int counter = 0;

void call() {
    counter++;
    // counter now holds the current recursion depth
    if (/* not at the base case */) {
        call();  // recursive call
    }
    counter--;
}
``````

## Fork/Join framework for quicksort executes longer than normal quicksort

By : ikaz
Date : March 29 2020, 07:55 AM
For small enough datasets:

- The cost of synchronizing data between different threads will exceed the benefit of parallelism.
- Several threads will work on ranges of memory so close to each other that the same data is cached in the L1 cache of each core, which means less efficient use of cache (the same data is fetched from memory several times when it was actually in the cache of another core).
- Bubble sort will actually outperform quicksort, because even though O(n^2) > O(n log n), the cost of making recursive calls will exceed the extra O-complexity of bubble sort.

## Python Ternary Recursion

By : kurbaa
Date : March 29 2020, 07:55 AM
So the big idea behind any base change is the following:
Take a number n written in base b, say 123. That means n in base 10 is equal to 1·b² + 2·b + 3. So conversion from base b to base 10 is straightforward: you take each digit and multiply it by the base raised to the right power. Going the other way, from base 10 to base b, you repeatedly take n modulo b to get the last digit and divide n by b to shift right, which is what the recursion below does.
code :
``````def numToTernary(n):
    '''Precondition: integer argument is non-negative.
    Returns the string with the ternary representation of non-negative
    integer n. If n is 0, the empty string is returned.'''
    if n == 0:
        return ''
    if n < 3:
        return str(n)
    return numToTernary(n // 3) + str(n % 3)

print(numToTernary(10))
# Out: '101'
``````
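The base-b-to-base-10 direction described above can be sketched the same way (`ternaryToNum` is a hypothetical name, not from the original question):

```python
def numToTernary(n):
    '''Ternary representation of non-negative integer n, as in the
    answer above ('' for 0).'''
    if n == 0:
        return ''
    if n < 3:
        return str(n)
    return numToTernary(n // 3) + str(n % 3)

def ternaryToNum(s):
    '''Inverse conversion: each digit contributes digit * 3**position,
    with '' mapping back to 0.'''
    if s == '':
        return 0
    # The leading digits form the quotient, the last digit the remainder.
    return 3 * ternaryToNum(s[:-1]) + int(s[-1])

print(ternaryToNum('101'))  # 10
print(ternaryToNum(numToTernary(42)))  # 42
```

The two functions are inverses of each other, so round-tripping any non-negative integer through both returns the original value.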