Can profiling be used to verify if optimization was successful?



I know that profiling is useful for identifying bottlenecks and for determining how much time each part of the code takes to execute. The latter isn't always easy to track in the midst of other paths being executed, so once I decide what I want to optimize, it can be hard to see the improvement in numbers. This is especially true in desktop apps that run constantly, where it is difficult to execute the same path, and execute it the same number of times, to get a reliable comparison.

It won't help me if before optimization the function ran X times and took 500 milliseconds, and after optimization it ran Y times and took 400 milliseconds.

In such cases, can I somehow use a profiler to determine the improvement, or do I have to resort to other options?
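One way around mismatched call counts is to compare per-call cost instead of aggregate time: divide the profiler's total time for the function by its call count, or isolate the path in a repeatable microbenchmark. Below is a minimal sketch using Python's cProfile, where `process_item` is a hypothetical stand-in for the code path being optimized:

```python
import cProfile
import pstats

def process_item(n):
    # Hypothetical stand-in for the code path being optimized.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(1000):
    process_item(500)
profiler.disable()

# Compare per-call cost instead of aggregate time: tottime / ncalls
# stays comparable even when call counts differ between runs.
stats = pstats.Stats(profiler)
calls, per_call = None, None
for func, (cc, nc, tt, ct, callers) in stats.stats.items():
    if func[2] == "process_item":
        calls, per_call = nc, tt / nc
        print(f"{calls} calls, {per_call * 1e6:.2f} microseconds per call")
```

With a per-call figure, 500 ms over X calls versus 400 ms over Y calls becomes directly comparable. A dedicated microbenchmark of the isolated path (e.g. with `timeit`) removes the rest of the application from the measurement entirely.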


Related to: Can profiling be used to verify if optimization was successful?
[ubuntu] ubuntu 8.04 - k3b burn successful, verify fails
Programming Languages
There is a problem with K3b: the burn succeeds, but the verify step fails. When you manually check the validity of the burn, it is OK. So: burn OK, verify fails.
This problem appeared in 8.04; 7.10 works as expected.
The problem is seen when burning ISOs; I assume the same will happen with regular projects.
From reading on the internet, this problem has been linked to the kernel, not K3b.
Is there a work-around, or has this been fixed? Please provide details on either item (work-around or fix).
I am running an 8.04.2 Ubuntu load with the Kubuntu desktop loaded on top.
Here is uname -a to provide kernel/platform details:
Linux ubuntu 2.6
[other] Profiling a server
Hi Guys,
I've got a potential new client with existing Linux systems. Normally that doesn't bother me, but I already know that at least one of the systems "is a bit quirky" and will only run its bespoke app with specific versions of software on the box.
I was wondering what everyone does when it comes to profiling a server? Before I look into whether I can migrate their app to a new server/platform, I want to make sure I know everything there is to know about the currently installed server.
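Much of that inventory can be scripted before any migration planning starts. A minimal sketch in Python collecting kernel, architecture, and an installed-package count (the `dpkg-query` call assumes a Debian/Ubuntu box and is skipped elsewhere; use `rpm -qa` on Red Hat-style systems):

```python
import platform
import shutil
import subprocess

# Collect a basic inventory of the host before planning a migration.
info = {
    "kernel": platform.release(),
    "machine": platform.machine(),
    "python": platform.python_version(),
}

# Installed-package count (dpkg-query assumes Debian/Ubuntu;
# use rpm -qa on Red Hat-style systems).
if shutil.which("dpkg-query"):
    out = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
        capture_output=True, text=True,
    ).stdout
    info["package_count"] = len(out.splitlines())

for key, value in info.items():
    print(f"{key}: {value}")
```

This only captures the basics; for a "quirky" box you would extend it with the exact versions the bespoke app depends on (shared libraries, interpreters, config files) before committing to a new platform.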
SWIG Profiling
Anybody have any experience profiling applications that use SWIG?
I am looking to profile code that is written in C++ with a Python/SWIG wrapper. If I use gprof, is it enough to just compile the C++ code with -pg, or will I have to recompile Python/SWIG/NumPy/etc.?
If I use something like oprofile, will SWIG obfuscate the profiling process?
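One thing worth knowing either way: profiling from the Python side only sees the wrapper boundary, so all time spent inside the compiled extension is charged to the one wrapped call, with no breakdown of the C++ internals. A small illustration with cProfile, where `math.factorial` stands in for a hypothetical SWIG-wrapped C++ function:

```python
import cProfile
import math
import pstats

# math.factorial stands in for a hypothetical SWIG-wrapped C++
# function: like a SWIG extension, it is compiled code that the
# Python-level profiler cannot see inside.
def wrapped_call(n):
    return math.factorial(n)

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    wrapped_call(3000)
profiler.disable()

# All time inside the compiled code shows up as a single entry;
# there is no per-function breakdown of the C++ internals.
stats = pstats.Stats(profiler)
extension_rows = [key for key in stats.stats if "factorial" in key[2]]
print(extension_rows)
```

For function-level detail inside the C++ code itself, a sampling profiler such as oprofile or perf can attribute samples to the shared library's symbols without recompiling Python or NumPy, provided the extension is built with symbol information.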
Determine if possible successful probe is successful exploit?

A logwatch report outputted the following message.

A total of 1 possible successful probes were detected (the
following URLs
contain strings that match one or more of a listing of strings that
indicate a possible exploit):

/?_SERVER[DOCUMENT_ROOT]=../../../../../../../../../../../etc/passwd%00
HTTP Response 200

I am aware that this match is based on a predefined list of strings from Logwatch, and that it is only a possible exploit, but I am unsure how to investigate further to be certain that it is not one.

  1. Is it enough to just visit this URL in the browser and check that no private information is being output, or are there other methods/places I need to check?

  2. Does the HTTP response 200 mean it reached the /etc/passwd file?
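On point 2: a 200 response only means the web server returned a page for that URL; by itself it says nothing about whether /etc/passwd was actually read. One way to check is to request the URL and look for the telltale passwd format in the response body. A rough sketch (the `localhost` host below is a placeholder for your own server):

```python
import urllib.request

# Placeholder host: point this at your own server.
url = ("http://localhost/?_SERVER[DOCUMENT_ROOT]="
       "../../../../../../../../../../../etc/passwd%00")

def looks_like_passwd(body: str) -> bool:
    # /etc/passwd lines have colon-separated fields; the root entry
    # is the clearest signature of an actual disclosure.
    return "root:x:0:0:" in body or "root:*:0:0:" in body

try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8", "replace")
    print("possible disclosure" if looks_like_passwd(body)
          else "no passwd contents in response")
except OSError as exc:
    print(f"request failed: {exc}")
```

Checking in a browser works too, but grepping for the root entry is less error-prone than eyeballing the page, and it is easy to repeat for every URL Logwatch flags.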


What does <OVERFLOW> mean during CUDA profiling?

I have noticed this weird behaviour during CUDA code profiling with nvprof or nvvp: instead of the actual values of the counters, it displays an overflow.

For example, I profile my application using

 nvprof --print-gpu-trace --metrics warp_execution_efficiency ./CUDA-EC

And the result I am getting is this:

Device           Kernel                      Warp Execution Efficiency
Tesla K20m (0)   fix_errors1_warp_cop        <OVERFLOW>

Can somebody tell me how to avoid this and fetch the actual value? This behaviour also occurs when I use nvvp.


