As to how prevalent p-value misunderstandings are, a good answer is in that first paper you cite. It looks like your research has already answered a lot of your own questions, which is great.
Significance thresholds for p-values are set by convention, and in my experience it's pretty well understood that clearing a threshold is not an indicator of some kind of mechanical, scientific certainty - it's just convention. But that is useful in and of itself: if you see a study using an unusually lenient cutoff, or reporting a p-value that doesn't meet the minimum standard set by convention, you should be much more wary about the claims being made, because the larger the cutoff, the easier it is to call weak evidence "statistically significant" - which is a good way to swindle people by science-washing your data. As a warning against pseudoscientific claims dressed up as rigorous statistical analysis, that's a very useful caution.
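A minimal sketch of why the cutoff matters (my own illustration, not from the post or any study): when the null hypothesis is actually true, the fraction of experiments that clear a significance threshold is roughly the threshold itself, so a looser cutoff waves through proportionally more pure noise as "significant."

```python
import math
import random

random.seed(0)

def null_experiment_pvalue(n=30):
    """Two groups drawn from the SAME distribution, so the null is true.
    Returns a two-sided p-value from a simple z-test on the means
    (population variance known to be 1 by construction)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    # Two-sided tail probability under the standard normal
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Run many experiments where there is genuinely nothing to find
pvals = [null_experiment_pvalue() for _ in range(2000)]
frac_at_05 = sum(p < 0.05 for p in pvals) / len(pvals)  # ~5% false positives
frac_at_20 = sum(p < 0.20 for p in pvals) / len(pvals)  # ~20% false positives
```

So a researcher who quietly uses a 0.20 cutoff instead of the conventional 0.05 roughly quadruples the rate at which noise gets labeled a finding.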
You shouldn't use a p-value all by itself for in-depth information, and obviously replication still matters quite a bit. In fact, the latest episode of the Brain Science Podcast features a cognitive neuroscientist making exactly these points about a variety of cognitive science research: much of it hasn't been replicated, and a lot of the methodology deliberately discards information in a way that produces an over-emphasis on localization of cognition and brain function.
But all that said, a p-value is still useful for getting a quick handle on a study, provided you understand what the number means. If a study reports that, assuming the null hypothesis were true, results as extreme as theirs or more so would occur only 5% or 1% of the time, that's still pretty useful in general terms.
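To make that definition concrete, here's a toy worked example (mine, not from any study): suppose a coin comes up heads 60 times in 100 flips. The one-sided p-value is the probability of seeing 60 or more heads assuming the null, a fair coin, is true - computed here as an exact binomial tail sum.

```python
from math import comb

n, observed_heads = 100, 60

# P(X >= 60) for X ~ Binomial(100, 0.5): sum the exact probabilities
# of every outcome at least as extreme as the one observed.
p_one_sided = sum(comb(n, k) for k in range(observed_heads, n + 1)) / 2 ** n
# About 0.028: under the null, a result this extreme or more would
# occur only about 3% of the time.
```

That number alone doesn't tell you the effect is large or the study is sound, but it does tell you the result would be fairly surprising if nothing were going on.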