The difference between R and Excel


(This article was first published on Revolutions, and kindly contributed to R-bloggers)

If you're an Excel user (or a user of any other spreadsheet, really), learning R can be hard. As this blog post by Gordon Shotwell explains, one of the reasons is that simple things can be harder to do in R than in Excel. But it's worth persevering, because complex things can be easier.


While Excel (ahem) excels at things like arithmetic and tabulation (and at some complex things too, like presentation), R's programmatic focus introduces concepts like data structures, iteration, and functions. Once you've made the investment in learning R, these abstractions make it possible to break complex tasks into discrete steps and to automate repeated, similar tasks far more easily. For the full, compelling argument, follow the link to the Yhat blog, below.

Yhat blog: R for Excel Users
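
To make the automation point concrete, here is a small, hypothetical sketch (the folder name, file pattern, and column name are invented for illustration): one short R function summarises every monthly CSV in a folder in a single call, a task that in a spreadsheet would mean repeating the same steps file by file.

# hypothetical example: summarise the "amount" column of every CSV in a folder
summarise_file <- function(path) {
  d <- read.csv(path)
  data.frame(file = basename(path), total = sum(d$amount), average = mean(d$amount))
}

files   <- list.files("monthly_reports", pattern = "\\.csv$", full.names = TRUE)
results <- do.call(rbind, lapply(files, summarise_file))
results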







How to apply design thinking in your organization


Design thinking helps organizations grow, innovate, and improve financial performance.

Design thinking is a process that uses design principles for solving complex problems. It helps organizations identify opportunities, unlock innovation, and improve their businesses.

Market leaders as varied as Apple, IBM, Intuit, Kaiser Permanente, and Nike have used design thinking to gain a competitive advantage, applying it to create innovative products and services. Within an organization, design thinking is a tool for unlocking cultural change. It makes companies more flexible, more responsive to their customers, and ultimately, more successful.

What are the elements of design thinking?

Although the name and number of its key principles may vary depending on how you apply them, the basic elements of design thinking always include some variation on the following: researching and defining the problem, ideating, and prototyping and iterating.

Researching and defining the problem: Design thinking draws upon user-centered research techniques, including ethnographic analysis, for understanding customers and users. During the research phase of the design thinking process, the goal is to understand and empathize with the people for whom you’re designing.

Ideating: During the ideation phase of the design thinking process, the goal is to generate a large number of interesting ideas that represent potential solutions. Techniques for ideation may include sketching, brainstorming, and mind mapping to create high-level concepts.

Prototyping and iterating: Making ideas tangible is critical to the design thinking process, as are the iteration cycles required to test and refine those ideas. Design has a bias toward making things, and prototyping is the technique that pushes the making process forward. You’ll create prototypes to demonstrate and validate basic designs built on the best concepts from your ideation exercises. To properly evaluate a design concept, you’ll want to prototype it in the same environment and context in which it will eventually function. Prototypes can be low or high fidelity, interactive or static; what matters is that they convey the experience flow.


What are the benefits of applying design thinking?

By introducing different ways of problem solving and methods for discovering what people truly need, design thinking helps organizations change their cultures to become more customer centric and collaborative. While every company is different, useful metrics for assessing the impact of design thinking include: cultural measures, such as employee satisfaction, internal engagement, and efficiency; financial measures, such as sales and productivity; and product quality measures, such as customer satisfaction.

What is the value of design thinking? While the financials alone may paint an incomplete picture, it’s notable nonetheless that design-led companies, such as Apple, IBM, and Nike, outperform the overall market by a significant margin. According to the Design Management Institute’s Design Value Index Study, these companies beat the S&P 500 by 211% over a 10-year period.

Design thinking can enable better decision-making around the creation of products and services. Travis Lowdermilk, a senior UX designer at Microsoft, describes how his team used design thinking to bring a new perspective to the company’s Cloud and Enterprise division:

It’s definitely about listening to our customers. We’re doing a lot of user-centered design now, as every company is. … Understanding people and understanding their problems is a … core component to that, but then also having this rich, ongoing conversation with them to get as sharp of a picture as to the nuances of those problems, so that we can deliver a solution that’s … right on point.


How do you apply design thinking?

Transforming into a design-centric company can be a long journey, but a necessary one if you want innovation to thrive within your organization.

How do you make that happen? Suzanne Pellican, vice president of experience design at Intuit, describes Intuit’s eight-year journey to become a design-focused company:

Because we invested in building innovation skills into our employee base, we are not only a design-thinking company—we’re a design-driven company. Meaning, we’re going from creating a culture of design thinking to building a practice of design doing, where we relentlessly focus on nailing the end-to-end customer experience. This means that before anything gets built, the whole team—engineers, designers, marketers, product managers—are interfacing with the customers to ensure they understand the problem well, and together, they design the best solution.

Such culture change, of course, is never easy. In KPCB’s “Design in Tech Report 2016,” John Maeda outlines five factors that contribute to the perception of great design in companies: talent, investment, executive / board support, innovation, and strategy.

A design-centric culture begins with strategic intent and broad commitment from an organization’s senior leadership, including at the executive and board levels. In some ways this is the most essential element: having leaders throughout the company who value design. Of course, with leadership on board, you’ll also need talented designers to execute and drive the vision forward. And there’s no doubt that finding talent can be a challenge, whether you’re a startup or an enterprise.

Next, making a long-term investment in design infrastructure, training, and support across the organization is critical. If design thinking does not permeate an organization, it is all too easy for things to revert to business as usual, with everyone hidden away in their own silos instead of collaborating. To make design a driving force within a company, everyone—from executive leadership to engineering, marketing to sales—should receive training and coaching in design thinking, whether it be in-person workshops or online courses.

Successfully calibrating design thinking to your company’s needs is a great challenge in itself. But if you can do it, design will become a core competency that differentiates your organization and its products and services, with rewards that can be truly great.








What is Federated SSO and How is it Different from SSO?



Federated SSO and SSO may look similar to many people, and you can hardly blame them: users only see the surface of the process. They log in with their credentials and use different applications or multiple systems without repeating the login step. It feels effortless. Under the hood, though, the two techniques work differently. So how is federated SSO different from SSO? If you are unsure about federated SSO, or your organization is struggling to choose between federated SSO and SSO, this article offers an overview of federated SSO and explains how the two approaches differ. Please read on.

What is federated SSO?

To understand federated SSO, you first need to understand federation. Federation is a relationship maintained between organizations in which users from each organization get access to the others' web properties. Federated SSO therefore gives the user an authentication token that is trusted across the organizations, so the user does not need a separate account with every organization in the federation to access its web properties and applications.

Note: the use of SAML is common in federation protocols.

How does Federated SSO work?

Let us start with …

Read More on Datafloq


How to leverage the power of prescriptive analytics to maximize the ROI


Prescriptive analytics (optimization) is a sophisticated analytics technology. It can deliver great business value by helping decision makers handle the tough trade-offs that arise when limited resources force choices among options. Optimization was traditionally applied by operations research professionals to solve operational problems, such as route optimization and logistics planning. With the advent of new technologies that make it possible to model larger, enterprise-wide problems and provide broad support for what-if analyses, prescriptive analytics now enables a new class of business analytics applications.
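
As a minimal sketch of what optimization means in practice (the profits and resource limits below are invented for illustration, not taken from the article), a linear program chooses the best mix of two options subject to resource constraints. In R this can be expressed with the lpSolve package:

library(lpSolve)   # install.packages("lpSolve") if needed

# maximise profit 3x + 5y subject to two resource constraints:
#   1x + 2y <= 14   (e.g. machine hours)
#   3x + 1y <=  9   (e.g. raw material)
obj <- c(3, 5)
con <- matrix(c(1, 2,
                3, 1), nrow = 2, byrow = TRUE)
dir <- c("<=", "<=")
rhs <- c(14, 9)

sol <- lp("max", obj, con, dir, rhs)
sol$solution   # optimal quantities of each option
sol$objval     # best achievable profit under the constraints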





Did the polls get it all wrong in 2016?


All right, one last post on polls, all the more so since tomorrow evening's debate should focus (more specifically) on the failures of the British and American polls of 2016.

  • So the polls were supposedly “wrong”?

I don't like this terminology. I have the impression that much of the confusion comes from the fact that, in the end, we analyze an election as if it were a referendum. That is why we tend to put the British referendum and the American elections in the same basket when we talk about the “failure of the polls”. An amusing point: while classically we try to quantify the number of votes obtained, today we communicate much more about the “probability of winning”. A bit like in weather forecasting, when we talk about the probability of precipitation (a concept that completely escapes me, as I discussed several years ago on this blog).

But let us come back, for a moment, to what happened. On the eve of the American vote, we were told that the probability of Donald Trump winning was on the order of 15%. That is roughly the probability of rolling a 6 with one die, or of getting a sum of 6 when rolling two dice. How to put it… I remember the reaction of my son, several years ago, who would get upset when we played Snakes and Ladders (or the Game of the Goose, or Ludo) and I happened, by chance, to roll a 6, which was exactly what I needed to win. “You cheated, Dad!” Do I really need to write a blog post to explain that rolling a 6 with a die “happens”? As I said in a post the other day, we find here some classic debates in risk management. If I predict the death of my mother-in-law with a probability of 1 in 1,000, or even 1 in 10,000, and she dies, was I right or not? It is the multitude of predictions that allows us to judge the quality of a forecast. If, in a large population, the proportion of deaths is 1 in 1,000, then my estimate was good. But an election is observed only once…

Let us continue this line of thought for a moment… This morning, as I was leaving, the weather forecast announced a “15% probability of rain”. What should I do with this information? Should I stop checking it in the morning because it was wrong, since it did rain? Was it even wrong? And am I even entitled to compare a weather forecast and a vote? If we think about it for a moment, no. If the forecast tells me there is a “15% probability of rain”, I can legitimately decide to postpone my plan for a long all-day bike ride, and my decision will have no impact on whether or not it rains (unless, like me, you have noticed that it rains precisely on the days when you have neither an umbrella nor a raincoat…). But for votes? If, on election day, I am told that my candidate has a 15% chance of winning, and I decide not to go out and vote, that is not the same thing at all! Whether he wins or not depends in large part on my action! People can change their minds and make decisions based on what they are told, and that will have an impact on the election (if enough people reason this way). Whereas for the rain…

  • A “probability of winning”?

Predicting the future is hard. What I like on some sites (I am thinking of 538) is, for example, the distinction made between the “forecast” (a prediction for a vote still several weeks away) and the “now-cast”, a projection of what would happen in a hypothetical election held at the time of the poll. It may not seem like much, but that is quite a fundamental distinction, I think. Now, a very important point, it seems to me, is the weight given to this “probability of winning”.

In my previous post, I ran a few simulations to show that the probability of winning is a difficult concept to grasp, because it essentially reflects uncertainty. Instead of running simulations, let us do a few calculations. In the case of the British referendum (to keep the interpretation simple), suppose a poll puts “Remain” at 52%. If we want a 95% confidence interval, we have (roughly) an interval of ±3 points, i.e. [49%, 55%]. In statistical terms, we have a normal distribution centered at 52% with a standard deviation of 1.5%. The probability of being above 50% is then 91%

> 1-pnorm(50,52,1.5)
[1] 0.9087888

which gives odds of 10 to 1

> (1-pnorm(50,52,1.5))/pnorm(50,52,1.5)
[1] 9.963564

If we had obtained 51%, i.e. a confidence interval of [48%, 54%], the odds would be 3 to 1.

> (1-pnorm(50,51,1.5))/pnorm(50,51,1.5)
[1] 2.960513

In other words, if between two consecutive days the polls move from 51% to 52% (a one-point increase), the odds jump from 3 to 1 to 10 to 1! (I talk about odds here because many people used online betting markets to predict which side would win.) Put differently, the bets (or the probabilities that one side wins) are extremely sensitive to very small variations in the model. And this is a real problem when you know how poll smoothing works (very strong at the start, then weaker and weaker as election day approaches): the probability of winning becomes incredibly volatile at the end of the race, and can swing from 5 to 1 for one candidate to 5 to 1 for the other if two polls differ by 3 points (51.5% for the first, 48.5% for the second).
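
A quick check of that last claim, using the same calculation as above with 51.5% and then 48.5%:

> (1-pnorm(50,51.5,1.5))/pnorm(50,51.5,1.5)
[1] 5.302974
> (1-pnorm(50,48.5,1.5))/pnorm(50,48.5,1.5)
[1] 0.1885734

so the odds swing from roughly 5 to 1 in favour of one side to roughly 5 to 1 against it.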

Historically, polls reported a proportion, say 52%. Then, a few years ago, the confidence interval appeared. And there, I have the impression that journalists must have felt they were losing out, because saying that the proportion is between 49% and 55% (with 95% probability) is not very sexy. The workaround seems to be to talk about “the probability of winning”, which yields highly variable numbers, very large or very small. In short, telegenic numbers, almost clear (“probability of winning”, everyone understands that, right?), and often huge, here for example 10 to 1 (or 91% in more probabilistic terms). But what if we tried to put a confidence interval on that probability of winning? Roughly, when the observed frequency is around 52%, the probability of winning is 91%, but the confidence interval for it runs from 25% to 100%

> 1-pnorm(50,52+c(-2,+2)*1.5,1.5)
[1] 0.2524925 0.9995709

With a frequency of 50.5% obtained in a poll, I would get a 95% confidence interval running from 5% to 99%

> 1-pnorm(50,50.5+c(-2,+2)*1.5,1.5)
[1] 0.04779035 0.99018467

In short, this probability of winning is itself rather uncertain… isn't it? Put differently, this whole “probability of winning” business is quite a scam…

  • What happened in Britain?

Yes, the polls got it wrong, in the sense that there was a (relatively) large gap between the election results and the polls: “Leave” won, while the polls seemed inclined to believe that “Remain” would have a majority. In short, overall, we predicted 0 and got 1. That is a big error. Now, if we look at the details a little, as always there can be several explanations for this gap between the polls and the final results, as Andrew Gelman noted, for example:

  1. the people who answered the polls were not a “representative sample” of the electorate (and the underlying reason hardly matters: if the two populations differ, we should not be surprised to see a difference)
  2. poll responses are a poor measure of voting intentions (for example if the share of undecided voters is too high, and not random)
  3. the decision was made on the last day
  4. turnout differed from what was estimated (for example, people who wanted to vote “Remain” but abstained, convinced their side would win)
  5. plain bad luck, what statisticians call sampling uncertainty.

I wrote a post on this last point, recalling that it is much larger than what is usually taught in statistics courses, precisely because of these other effects. And that is probably what happened in Britain: a mix of all these causes. Oh, one more point I could mention here: the type of poll used had a considerable impact, with telephone polls putting “Remain” much higher than online polls.

  • What happened in the United States?

As I have already said, predicting the vote in the United States is complicated, given the electoral system. But at least the models used are clearly described (see the guide published by 538 in June 2016). The modeling behind a forecast proceeds (roughly) in four steps:

  1. Collect the polls, weight them and aggregate them. For example, a local (state-level) poll is very different from a national poll, and a poll of the whole population is not the same thing as a poll of likely voters.
  2. Adjust the polls. In particular, it can be useful to smooth the results for a given polling firm by averaging over its previous polls. The subtlety is to let the smoothing parameter vary: very conservative weights far ahead of the election (heavy smoothing, with little variability between two dates) and, conversely, lower weights on the past as election day approaches (light smoothing, with results that become very volatile, very uncertain).
  3. Combine with demographic and economic data. In the United States it is important to account for demographic variables in order to understand possible shifts between the 2016 and 2012 elections, for example ethnic concentrations or a change in the average age of a geographic area. It is then possible to account for economic variables, in particular regional employment levels, personal income, (personal) consumption levels, and stock-market indices (as 538 notes). One can also use an important piece of information: the number of days remaining before the election. Lauderdale & Linzer (2014) give quite a lot of detail on exactly these predictive variables.
  4. Project, through simulation. By simulating scenarios that are correlated across states, one obtains scenarios that can then be aggregated to produce national results (a rough sketch follows below).
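
To give a very rough idea of that last step (and only as a sketch: the vote shares, electoral-vote counts, and correlation below are invented for illustration, not taken from 538's actual model), one can draw correlated state-level vote shares from a multivariate normal distribution and tally electoral votes across simulated scenarios:

library(MASS)   # for mvrnorm

set.seed(1)
mu    <- c(0.52, 0.50, 0.48)   # assumed mean vote shares in three hypothetical states
ev    <- c(10, 20, 15)         # assumed electoral votes of those states
sigma <- 0.02                  # assumed polling error (standard deviation)
rho   <- 0.6                   # assumed correlation of errors across states
Sigma <- sigma^2 * (rho + (1 - rho) * diag(3))

# simulate 10,000 correlated scenarios and aggregate them to a national result
sim   <- mvrnorm(10000, mu, Sigma)
evwon <- (sim > 0.5) %*% ev    # electoral votes won in each scenario
mean(evwon > sum(ev) / 2)      # estimated "probability of winning"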

This kind of model behaved relatively well, within a reasonable margin of error. But the fact is that a few percentage points of error produced results very different from those “expected”. What is interesting is that with 48% of the vote in 2012, Mitt Romney lost the presidential election, while Donald Trump won with, likewise, nearly 48% of the vote. But Donald Trump outperformed what the polls announced in key (swing) states. Another important point was the strong shifts in voting intentions, right up to the end: more than 15% of people admitted having decided which candidate they would vote for during the final week. Other pollsters noted strong changes in non-responses: many people who ultimately voted for Donald Trump long refused to admit it and were recorded as “no response”. Finally, and I think this is an important point (I attempted a post on the subject a few years ago), American elections are not two-candidate elections. And poorly accounting for Gary Johnson probably introduced errors into the polls. While it was known that Gary Johnson and Donald Trump were chasing the same electorate, many overestimated his score (estimated at 5% nationally, but ultimately 3%), with the difference probably transferring to Donald Trump.

That said, we have to admit it is difficult to build a model of electoral behavior based on voters' rationality. When 538 proposes including an economic indicator, it is because a sudden rise in unemployment should have an impact on some people's vote (which impact remains to be seen, but the choice of explanatory variables is an exercise in itself, as shown by the explosion of sparse models, designed to find the right predictive variables rather than add more noise). But how are we to understand that Donald Trump, a candidate who received the endorsement of the Ku Klux Klan, won more votes than Mitt Romney did in 2012 among black, Hispanic, and Asian voters? Can we still believe that voters are rational?



Would Trump’s ‘Blue Lives Matter’ Effort Really Help Protect Police?


President Trump earlier this month announced a new effort to impose harsh penalties on people who hurt or kill police officers. But even some former officers say the policy isn’t likely to have much of an effect.

During the presidential campaign, Trump promised to mandate the death penalty for cop killers. His executive order, signed Feb. 9, doesn’t go quite that far, but it does order the Justice Department to explore new legislation to make attacks on police officers a federal crime and, in the meantime, to use its existing authority to prosecute such crimes, which are now usually tried at the state level. It also calls on the Justice Department to consider recommending that new mandatory minimum sentences be established for people convicted of violence against police officers.

The order comes at a time when policing has become a highly politicized issue. Officers have said their jobs are more difficult after high-profile killings by police in cities such as Ferguson, Missouri. High-profile attacks on police officers, including ambushes in Dallas and Baton Rouge, Louisiana, made 2016 the deadliest year in decades for officers killed in targeted assaults, according to a report from the National Law Enforcement Officers Memorial Fund, a nonprofit group dedicated to honoring fallen officers. In response, some states have passed or are considering so-called Blue Lives Matter legislation that backers say would protect police. The Fraternal Order of Police, an organization representing over 330,000 law enforcement officers, praised Trump’s order and said federal action is necessary.

“Police officers are being attacked simply because they are police officers,” said Jim Pasco, a senior adviser to the group’s president. “This is not just legislation to make cops feel better about themselves; it’s about deterrent and punitive legislation.”

But despite an uptick in overall officer deaths last year, the total number of law-enforcement deaths has generally declined over the longer term. In 2015, 123 officers died in the line of duty, compared with 184 in 1995, according to data from the memorial fund. Those totals include accidents and work-related illnesses (such as on-duty heart attacks). According to data from the FBI, 41 officers were intentionally killed while on duty in 2015, fewer than in the 1990s.

FBI data on nonfatal assaults shows that they don’t appear to be rising either: There were 50,212 officers assaulted in 2015, down from 56,686 in 1995.

The FOP says the FBI’s count of reported assaults on and killings of police officers is too low and that data from nonprofit groups such as the Officer Down Memorial Page could be more reliable. The Officer Down estimate of killings, however, is only modestly higher than official FBI counts in recent years and shows a similar downward trend.


Ambush or targeted attacks similar to those in Dallas or Baton Rouge typically make up a small portion of deaths and assaults. Of the 50,212 officers assaulted in 2015, about 28 percent sustained injuries, according to the FBI. And only 0.5 percent, or 240 of the 50,212, were assaulted in ambushes or targeted attacks. Most assaults occur while officers are simply doing their jobs, such as during disturbance calls or while attempting an arrest.

Police officers assaulted in the U.S. by situation, 2015

SITUATION                              NUMBER
Disturbance call                       16,256
Attempting arrest                       7,820
Other                                   7,509
Handling prisoner                       6,143
Investigating suspicious person         4,647
Traffic pursuit/stop                    3,972
Handling person with mental illness     1,710
Pursuing burglary suspect                 840
Civil disorder                            677
Pursuing robbery suspect                  398
Ambush situation                          240

Source: FBI

The White House issued a statement on the order saying that it will ensure that anyone who tries to harm an officer will be “aggressively prosecuted.” But experts aren’t sure what federal enforcement of these crimes would accomplish that state governments aren’t already doing. Assaulting a federal officer — an FBI agent, for example — is already illegal, and the majority of states have strict penalties that are heavily enforced. For example, in New Hampshire, killing a police officer is already a capital offense. In New York, assaulting an officer is a class C felony carrying a sentence of up to 15 years.

“There is a pile of protection for police officers now; even simple assault elevates into an aggravated assault,” said Jon Shane, a professor of policing policy at John Jay College of Criminal Justice in New York and a former officer. “Most crimes occur at a state level and are prosecuted at a state level. What is this legislation going to do at the federal level that hasn’t been done at the local level?”

Some law-enforcement groups think stricter laws are needed. States such as Kentucky and Louisiana have passed or are considering legislation to make violence against police a hate crime, which would carry harsher penalties. (Civil rights groups have criticized the bills for going too far.) Police groups, including the Fraternal Order of Police, have called for similar legislation at the federal level.

Critics of Trump’s order worry that it could lead to crackdowns on legitimate forms of protest against the police. Dennis Kenney, a professor at John Jay College of Criminal Justice and a former police officer, said “tough on crime” initiatives can damage trust between departments and communities.

“It will have a chilling effect,” Kenney said. “That would drive a much deeper wedge between police and communities where they’re already struggling.”

Police officer deaths by cause, 2011-15 annual average

CAUSE                 DEATHS
Traffic-related         50.2
Shot                    49.2
Job-related illness     18.8
Fall                     3.2
Drowned                  2.0
Stabbed                  1.8
Terrorist attack         1.4
Aircraft accident        1.2
Beaten                   1.0
Electrocuted             0.6
Strangled                0.6
Bomb-related             0.4
Struck by train          0.4
Horse-related            0.2
Boating accident         0.2

Source: National Law Enforcement Officers Memorial Fund

Trump’s order wouldn’t do anything about accidental deaths, which account for more than half of all on-duty deaths most years, according to FBI data. Traffic-related incidents were the leading cause of officer deaths in 2015 and have been a leading cause for most of the past decade. A substantial number of law enforcement officers also die each year from job-related illnesses. Police officers have a higher rate of work-related illnesses than most other occupations, with 21 percent of nonfatal injuries and illnesses resulting from overexertion. Some researchers are looking into whether shift length has an impact on officer wellness and safety.

Some experts think other reforms would do more to protect officers. Former President Barack Obama’s task force on “21st Century Policing,” established in 2014, recommended implementing scientifically supported shift lengths in police departments, boosting officer training and education, and improving body armor to promote officer wellness. Most police groups agree on mandatory use of body armor and seat belts to enhance officer safety.

Trump’s executive order is part of a larger effort to make good on his “law and order” promises from the campaign. In two other executive orders signed the same day, Trump established a task force to address violent crime, illegal immigration and drug trafficking and called for stronger enforcement of federal laws in relation to transnational criminal organizations. And after being sworn in as attorney general, Jeff Sessions pledged to address what he called a “dangerous permanent trend” of rising crime. (There has been a significant increase in murder in many big cities in the past two years, but overall rates of crime remain near multi-decade lows.)

Trump’s approach to criminal justice has drawn mixed reviews from law enforcement. The nomination of Sessions drew strong support from police-officer groups. But other groups have criticized the get-tough approach from Trump and Sessions and have argued that long prison sentences can make crime worse by damaging communities and eroding trust. The Law Enforcement Leaders to Reduce Crime and Incarceration, an organization made up of around 200 current and former police chiefs and other officials, released a five-point brief containing policy suggestions for Trump on crime reduction. The group highlighted community policing, saying that tensions between communities and police had sparked a “false debate” that citizens and politicians have to choose between support for law enforcement or their community.

“It almost becomes a false narrative that you have to choose between Blue Lives Matter laws or reducing injuries with better technology and other solutions,” said Ronal Serpas, the organization’s co-chairman. “When we spend more time and money on research to understand the front end of these relationships between officers and the community, we have much better success in reducing officer injuries.”





Guest Blog: STEPHEN SENN: ‘Fisher’s alternative to the alternative’

“You May Believe You Are a Bayesian But You Are Probably Wrong”


As part of the week of recognizing R.A. Fisher (February 17, 1890 – July 29, 1962), I reblog a guest post by Stephen Senn from 2012. (I will comment in the comments.)

‘Fisher’s alternative to the alternative’

By: Stephen Senn

[2012 marked] the 50th anniversary of RA Fisher’s death. It is a good excuse, I think, to draw attention to an aspect of his philosophy of significance testing. In his extremely interesting essay on Fisher, Jimmie Savage drew attention to a problem in Fisher’s approach to testing. In describing Fisher’s aversion to power functions Savage writes, ‘Fisher says that some tests are more sensitive than others, and I cannot help suspecting that that comes to very much the same thing as thinking about the power function.’ (Savage 1976) (P473).

The modern statistician, however, has an advantage here denied to Savage. Savage’s essay was published posthumously in 1976 and the lecture on which it was based was given in Detroit on 29 December 1971 (P441). At that time Fisher’s scientific correspondence did not form part of his available oeuvre but in 1990 Henry Bennett’s magnificent edition of Fisher’s statistical correspondence (Bennett 1990) was published and this throws light on many aspects of Fisher’s thought including on significance tests.


The key letter here is Fisher’s reply of 6 October 1938 to Chester Bliss’s letter of 13 September. Bliss himself had reported an issue that had been raised with him by Snedecor on 6 September. Snedecor had pointed out that an analysis using inverse sine transformations of some data that Bliss had worked on gave a different result to an analysis of the original values. Bliss had defended his (transformed) analysis on the grounds that a) if a transformation always gave the same result as an analysis of the original data there would be no point and b) an analysis on inverse sines was a sort of weighted analysis of percentages with the transformation more appropriately reflecting the weight of information in each sample. Bliss wanted to know what Fisher thought of his reply.

Fisher replies with a ‘shorter catechism’ on transformations which ends as follows:

A…Have not Neyman and Pearson developed a general mathematical theory for deciding what tests of significance to apply?

B…Their method only leads to definite results when mathematical postulates are introduced, which could only be justifiably believed as a result of extensive experience….the introduction of hidden postulates only disguises the tentative nature of the process by which real knowledge is built up. (Bennett 1990) (p246)

It seems clear that by hidden postulates Fisher means alternative hypotheses and I would sum up Fisher’s argument like this. Null hypotheses are more primitive than statistics: to state a null hypothesis immediately carries an implication about an infinity of test statistics. You have to choose one, however. To say that you should choose the one with the greatest power gets you nowhere. This power depends on the alternative hypothesis but how will you choose your alternative hypothesis? If you knew that under all circumstances in which the null hypothesis was true you would know which alternative was false you would already know more than the experiment was designed to find out. All that you can do is apply your experience to use statistics, which when employed in valid tests, reject the null hypothesis most often. Hence statistics are more primitive than alternative hypotheses and the latter cannot be made the justification of the former.

I think that this is an important criticism of Fisher’s but not entirely fair. The experience of any statistician rarely amounts to so much that this can be made the (sure) basis for the choice of test. I think that (s)he uses a mixture of experience and argument. I can give an example from my own practice. In carrying out meta-analyses of binary data I have theoretical grounds (I believe) for a prejudice against the risk difference scale and in favour of odds ratios. I think that this prejudice was originally analytic. To that extent I was being rather Neyman-Pearson. However some extensive empirical studies of large collections of meta-analyses have shown that there is less heterogeneity on the odds ratio scale compared to the risk-difference scale. To that extent my preference is Fisherian. However, there are some circumstances (for example where it was reasonably believed that only a small proportion of patients would respond) under which I could be persuaded that the odds ratio was not a good scale. This strikes me as veering towards the N-P.

Nevertheless, I have a lot of sympathy with Fisher’s criticism. It seems to me that what the practicing scientist wants to know is what is a good test in practice rather than what would be a good test in theory if this or that could be believed about the world.

References: 

J. H. Bennett (1990) Statistical Inference and Analysis: Selected Correspondence of R.A. Fisher, Oxford: Oxford University Press.

L. J. Savage (1976) On rereading R. A. Fisher. The Annals of Statistics, 441-500.


