Apart from failing to recognize convergence, there is a more severe problem with your code: you don't test for zero or near-zero values before performing divisions, i.e. in the simple Newton method you should test fp for 0 (or near-0) before dividing by it.
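To illustrate, here is a minimal sketch of such a guard in Python (the names f and fp just stand in for whatever your code calls the function and its derivative - I don't know your actual code):

```python
def newton_step(f, fp, p, eps=1e-12):
    """One guarded Newton step.

    f, fp: the function and its derivative (placeholder names).
    Refuses to divide when the derivative is (near-)zero instead of
    silently producing a huge or invalid step.
    """
    d = fp(p)
    if abs(d) < eps:
        raise ZeroDivisionError(f"derivative too close to 0 at p={p}")
    return p - f(p) / d
```

The exact threshold (eps) is a judgment call; what matters is that the failure is detected rather than propagated.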
The problem in question is very ill-conditioned, meaning that it is prone to convergence problems due to rounding effects. It's likely the problem was deliberately set up this way to make you aware of these potential problems, so I won't spoil your experience by going into more detail. But I would advise you to do the following:
1. Change your print-out to show the current value of p at every step of the iteration, even before it has converged. Please do add a line break at the end, too - it will help!
2. Along with the current value of p, also print out the value of the function f(p). f(p) should converge to 0, so this makes it easier to judge the quality of the current guess.
3. Fix the loop to exit after convergence, as suggested in solution 1.
4. Try out very small tolerance values (try repeatedly with ever-shrinking values). Note how you'll eventually reach a point where the iteration simply doesn't improve any more, or even gets significantly worse.
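Points 1-4 might look something like this in Python (a sketch only - the double root (x-1)^2 is a stand-in for your assignment's function, which I don't know, and all names and tolerances are illustrative):

```python
def newton_verbose(f, fp, p0, tol, max_iter=50):
    """Newton iteration that prints p and f(p) at every step (points 1
    and 2) and leaves the loop as soon as the step size drops below
    tol (point 3)."""
    p = p0
    for i in range(max_iter):
        fv, dv = f(p), fp(p)
        print(f"iter {i:2d}: p = {p:.15g}, f(p) = {fv:.3g}")
        if abs(dv) < 1e-300:            # guard against a vanishing derivative
            print("derivative vanished - stopping")
            break
        step = fv / dv
        p -= step
        if abs(step) < tol:             # point 3: exit after convergence
            print(f"converged after {i + 1} iterations")
            break
    return p

# Point 4: try ever smaller tolerances and watch how the iterates behave.
for tol in (1e-4, 1e-8, 1e-12):
    newton_verbose(lambda x: (x - 1.0)**2, lambda x: 2.0*(x - 1.0), 2.0, tol)
```

A double root like this only converges linearly, so the effect of shrinking tolerances is easy to observe in the printed f(p) column.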
P.S.:
There are many ways to control an iterative numerical algorithm such as the Newton method:
1. number of steps: this will always work, and is a good way to prevent endless loops or detect convergence issues.
2. difference of the function value from the target value (0 in this case): you don't check this here - why? That is the central goal here!
3. relative improvement, i.e. how much has the result (obtained from 2.) improved compared to the last step, or the last few steps?
4. relative step size, i.e. how much (or, rather, how little) the argument is being changed compared to the last step - this is what you test; consider that this step size does not in fact indicate your progress (or lack thereof) towards a better solution! To judge that, see 2. and 3. above!
5. history of steps: when you keep stepping back and forth (as you can see in your results from about iteration 15) without any progress (which you can measure by looking at f(p)), then your algorithm is stuck!
The importance of implementing these convergence criteria is pretty much in the order they are listed here: the first is just a safeguard in case the others don't catch a problem. The second measures what you're actually trying to accomplish. The 3rd and 4th are really only needed to keep the calculations in a range where rounding errors (hopefully) won't influence the algorithm. And the 5th (and possibly other tests) is there to catch the really odd corner cases where your chosen algorithm simply doesn't work as expected.
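The five criteria could be combined along these lines (a Python sketch under my own assumptions - the names, tolerance defaults, and the stall/history bookkeeping are all illustrative choices, not taken from your code):

```python
import math

def newton_controlled(f, fp, p0, f_tol=1e-12, step_tol=1e-15, max_iter=100):
    """Newton iteration with the layered stopping tests 1-5 above."""
    p = p0
    prev_fv = math.inf
    stalls = 0                          # steps without improvement (for 3.)
    seen = []                           # short history of iterates (for 5.)
    for _ in range(max_iter):           # 1. hard iteration limit
        fv = f(p)
        if abs(fv) < f_tol:             # 2. f(p) close enough to target 0
            return p
        if abs(fv) >= abs(prev_fv):     # 3. no relative improvement
            stalls += 1
            if stalls >= 3:
                return p
        else:
            stalls = 0
        if p in seen:                   # 5. stepping back and forth (exact
            return p                    #    repeats only - a simplification)
        seen = (seen + [p])[-8:]
        dv = fp(p)
        if dv == 0.0:                   # cannot take a Newton step at all
            return p
        step = fv / dv
        if abs(step) < step_tol:        # 4. step size has stalled
            return p
        p -= step
        prev_fv = fv
    return p                            # criterion 1 triggered
```

For a well-behaved problem only criterion 2 ever fires; the others exist precisely for the ill-conditioned cases discussed above, e.g. the classic cycle f(x) = x^3 - 2x + 2 started at 0, which bounces between 0 and 1 forever and is caught by the history test.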
P.P.S.:
Just to get an impression of the things you need to consider when programming a foolproof rootfinder that always delivers a right answer, google for Jack Crenshaw's "The World's Best Rootfinder" or just look it up on www.embedded.com. You'll find a series of about 6 articles spread out over the course of several years in which Crenshaw describes how he went about implementing a rootfinder that is efficient, but also capable of dealing with every conceivable corner case it might encounter.
It's also very enlightening to see how, over time, Crenshaw came to consider specific cases that he didn't think of at first. Even if you have difficulties following the math involved there, I encourage you to take a look!
Funnily, "TWBRF" wouldn't help here, as its core assumption is a true root, i.e. a function that passes from negative to positive values or vice versa, whereas your example has a local minimum at its root.
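You can see why with any function that touches zero without crossing it - below I'm using (x-1)^2 as a hypothetical stand-in, since I don't know your actual function:

```python
# A double root at x = 1: f touches zero but never changes sign.
f = lambda x: (x - 1.0) ** 2

# A bracketing method like TWBRF needs f(a) and f(b) with opposite
# signs before it can even start - no such bracket exists here:
a, b = 0.0, 2.0
print(f(a) * f(b) > 0)   # True: same sign on both sides, no valid bracket
```

The product f(a)*f(b) stays positive no matter which a and b you pick, so the sign-change precondition can never be satisfied.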