These are my links for July 8th through July 29th:
- Should Copyright Of Academic Works Be Abolished? – The conventional rationale for copyright of written works, that copyright is needed to foster their creation, is seemingly of limited applicability to the academic domain. For in a world without copyright of academic writing, academics would still benefit from publishing in the major way that they do now, namely, from gaining scholarly esteem. Yet publishers would presumably have to impose fees on authors, because publishers would not be able to profit from reader charges. If these publication fees would be borne by academics, their incentives to publish would be reduced. But if the publication fees would usually be paid by universities or grantors, the motive of academics to publish would be unlikely to decrease (and could actually increase) – suggesting that ending academic copyright would be socially desirable in view of the broad benefits of a copyright-free world…
- BBC – Radio 4 In Our Time – Philosophy Archive – You can listen again to all the programmes online. The most recent programmes appear at the top of the page.
- [0907.1579] The Computational Power of Minkowski Spacetime – The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time-dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time-dilation is quantified as an algorithmic resource by relating relativistic energy to an *n*th order polynomial time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic *Grover speedup* from quantum computing and an *n* = 2 speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation.
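The time-dilation bookkeeping behind this abstract can be sketched with the standard special-relativity relations (my notation, not necessarily the paper's):

```latex
% Proper time \tau elapsed on a traveller's clock versus coordinate
% time t in the rest frame of a stationary computer:
\tau = t\sqrt{1 - v^2/c^2} = \frac{t}{\gamma}
% So a computation taking t steps in the computer's frame occupies only
% t/\gamma of the traveller's proper time; buying a large \gamma costs
% relativistic energy
E = \gamma m c^2
```

The trade the paper quantifies is roughly this: the speedup grows with the Lorentz factor γ, but so does the energy bill, which is what lets the authors compare it to Grover's quadratic speedup.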
- How to choose a statistical test – This book has discussed many different statistical tests. To select the right test, ask yourself two questions: What kind of data have you collected? What is your goal? Then refer to Table 37.1.
- NPWRC :: Statistical Significance Testing – Four basic steps constitute statistical hypothesis testing. First, one develops a null hypothesis about some phenomenon or parameter. This null hypothesis is generally the opposite of the research hypothesis, which is what the investigator truly believes and wants to demonstrate. Research hypotheses may be generated either inductively, from a study of observations already made, or deductively, deriving from theory. Next, data are collected that bear on the issue, typically by an experiment or by sampling. (Null hypotheses often are developed after the data are in hand and have been rummaged through, but that's another topic.)
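The steps described in that excerpt – state a null hypothesis, collect data, then see how extreme the observed result is under the null – can be illustrated with a minimal two-sample permutation test in pure Python. The data and names here are fabricated for illustration, not taken from the linked page:

```python
import random

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Approximate p-value for the null hypothesis that samples a and b
    come from the same distribution, using |difference in means| as the
    test statistic and random relabelling of the pooled data."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # relabel under the null
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical measurements from two groups:
control = [5.1, 4.9, 5.0, 5.3, 4.8]
treated = [5.9, 6.1, 5.8, 6.0, 6.2]
p = permutation_test(control, treated)
# A small p-value is evidence against the null hypothesis that the two
# groups share a distribution.
```

Note the order of operations: the null hypothesis and the test statistic are fixed *before* the permutations are run, which is exactly the discipline the parenthetical in the excerpt is gently complaining about.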
- Data Mining Techniques – Data Mining is an analytic process designed to explore data (usually large amounts of data – typically business or market related) in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. The ultimate goal of data mining is prediction – and predictive data mining is the most common type of data mining and one that has the most direct business applications.