The Unz Review - iSteve Blog
Amazon Discovers Robot-White Male Axis of Evil
This would make a good inspiration for a bestselling Young Adult dystopian trilogy of novels for teenage girls to read: Robots replace HR ladies in the hiring process, but the robots hire more white men because white men tend more often to get the job done.

From Reuters:

Amazon scraps secret AI recruiting tool that showed bias against women
Jeffrey Dastin

SAN FRANCISCO (Reuters) – Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars – much like shoppers rate products on Amazon, some of the people said.

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

Uh, presumably, Amazon’s AI looked not just at old resumes but at how old resumes correlated with job performance among those hired.

Jeff Bezos has hired a lot of people, male and female, over the years and has kept records on how they did. Judging by Jeff’s net worth, which plummeted today to $153 billion, some of them were pretty productive for him; and, even at Amazon, some hires were wash-outs. Jeff’s robots noticed that the old hires with the more masculine-sounding resumes tended to do better than the ones from Womyn’s Studies majors.
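Reuters doesn't describe the implementation, but the setup it reports (resumes in, star ratings out) is a stock supervised-learning pipeline. Here is a minimal sketch of that kind of model, with entirely invented resumes and performance labels; nothing below is Amazon's actual system:

```python
# Hypothetical sketch of a resume scorer trained on past hires' outcomes.
# Data and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_resumes = [
    "executed migration of billing service; shipped v2 on schedule",
    "women's chess club captain; organized campus outreach events",
]
performed_well = [1, 0]  # 1 = strong review history, 0 = wash-out (invented)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(past_resumes, performed_well)

# Score a new applicant; Amazon reportedly mapped scores to 1-5 stars.
print(model.predict_proba(["captain, women's chess club; Python, SQL"])[:, 1])
```

Any such model can only echo whatever correlations sit in the historical labels, which is the crux of everything below.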

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
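That last caveat is the proxy problem in a nutshell: scrub the explicit token and any correlated feature can carry the same signal. A toy demonstration, with invented features and a deliberately biased outcome variable:

```python
# Toy illustration of proxy features: remove the explicit token and a
# correlated stand-in (say, a college name) absorbs its predictive weight.
# All data here is synthetic and deliberately biased.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
womens = rng.integers(0, 2, n)                 # explicit "women's" token
college = womens ^ (rng.random(n) < 0.05)      # near-duplicate proxy feature
hired = (womens == 0) & (rng.random(n) < 0.9)  # biased historical outcome

full = LogisticRegression().fit(np.c_[womens, college], hired)
scrubbed = LogisticRegression().fit(college.reshape(-1, 1), hired)

print("token + proxy weights:", full.coef_[0])
print("proxy alone          :", scrubbed.coef_[0])  # proxy picks up the slack
```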

The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. ….

Another said a new team in Edinburgh has been formed to give automated employment screening another try, this time with a focus on diversity.

 
  1. dwb says:

    A robotic boot stamping on a (presumably female) human face forever.

    It had to go.

  2. Ed says:

    They’ll roll it out in their Asian offices.

    • Replies: @Desiderius
  3. SJWs anthropomorphizing robots. The ‘bots didn’t have stealth misogyny chips installed; they were programmed to favor productivity (which makes shopping cheaper, ladies).

    • Replies: @Anonymous
  4. Anon[171] • Disclaimer says:

    “Robots replace HR ladies in the hiring process, but the robots hire more white men because white men tend more often to get the job done.”

    That’s a pretty good writing prompt:

    HR ladies then reprogram robots to be politically correct, setting off the apocalypse. HR ladies can’t find the off switch on the killer robots, so they call the white guys in IT to do it for them. Roll credits. It won four Oscars that year. Spike Lee is set to direct the sequel, “Undercover Ro’: Robots in da Hood.”

    • Replies: @dwb
  5. Surely it is not beyond the wits of the programmers to build affirmative action as desired into the resume-reading program and give additional points for the inclusion of words like “supine” and “#MeToo”.

    Apparently the future is that warehouses and deliveries will be fully automated and it will not stop until Amazon is run by just one man.

    https://seekingalpha.com/article/4210891-take-look-englands-amazon

    So here is a look at the future under Jeff Bezos …

  6. VladIII says:

    “But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.”

    “disparate impact” reigns eternal.

  7. dwb says:
    @Anon

    “HR ladies then reprogram robots…”

    OK; we’ve wandered beyond science fiction to flat-out fantasy here.

  8. So that’s why Amazon hasn’t been profitable! It’s been overlooking great female talent.

    • LOL: Rob McX
    • Replies: @Father O'Hara
  9. Clearly an AI project that actually turns out to be intelligent is unacceptable. It is somewhat surprising that Amazon’s machine learning specialists did not realize right away that what Amazon needed was not Artificial Intelligence, but rather Affected Ignorance.

  10. @Ed

    Hey, hey, hey, this is HR!

  11. Anon[332] • Disclaimer says:

    “Surely it is not beyond the wits of the programmers to build affirmative action as desired…”

    Well, it sort of is. Artificial intelligence / machine learning has grown to the point where programmers often don’t understand how it works internally. It would take a lot of effort to tear the thing down to see what it was thinking, so it’s often easier to just shut it down and start over. That’s what happened here. I mean, they could give keywords some weight as you suggest, but there is no guarantee that the machine will reach the desired outcome in all cases. It could potentially reweight other factors in response, yielding nonsense results…or the same result as before…or introduce some subtle bias not understood at present but consequential down the line.

    The problem for progressives going forward is that objective machines often don’t agree with their biased views of reality. It is the human that is biased here rather than the machine; the radical left assumes that it must be the data, but often enough the data is fine and the machine reaches the correct conclusion.

    However, one way to solve “the problem” here is to train the machine to reach conclusions based on what it believes the human wants to hear rather than some objective measure like job performance. You could possibly do so by training it against a biased data set like affirmative action hires. The machine would look at applications and deduce that the human wants the workforce to be ~20% black, so it biases hiring based on job applications to get that result. The problem here is that white/Asian programmers just assumed that since they were hired on merit, others should be too. Turns out that men work harder than women. Oops.
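    A minimal sketch of that “give HR the composition it wants” scheme, selecting on a merit score but filling slots per group; scores, groups, and the shares are invented:

```python
# Sketch of the quota idea described above (the commenter's proposal, not
# anything Amazon built): rank on a merit score, but fill slots per group.
def quota_select(candidates, k, group_share):
    """candidates: list of (score, group); group_share: {group: fraction}."""
    picked = []
    for g, share in group_share.items():
        ranked = sorted((c for c in candidates if c[1] == g), reverse=True)
        picked += ranked[:round(k * share)]
    return sorted(picked, reverse=True)[:k]

applicants = [(0.9, "A"), (0.8, "A"), (0.7, "B"), (0.4, "B"), (0.3, "A")]
print(quota_select(applicants, k=3, group_share={"A": 0.8, "B": 0.2}))
# -> [(0.9, 'A'), (0.8, 'A'), (0.7, 'B')]
```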

    I’ve often wondered if an AI apocalypse might not be triggered by a warped view of reality (programmed to lie in response to inquiries or be immoral) given to a superintelligence by the radical left, rather than by the machine suddenly just deciding to kill all humans because it’s logical.

    • Replies: @Mike1
    , @Dtbb
  12. Trevor H. says:

    What makes us well? Diversity, health care, and cycling to work

    By Ziba Kashef

    For example, living in a community with a higher percentage of black residents was associated with greater well-being for all.

    https://news.yale.edu/2018/05/23/what-makes-us-well-diversity-health-care-and-cycling-work-matter

  13. The problem with the diversity mindset in the tech world is that all these hugely successful companies managed to be created and grown and then became what they are today WITHOUT all the women and minorities. They were not needed in the first place, so why should they be given preference in hiring now?

    • Replies: @L Woods
    , @Anon
  14. Rob McX says:

    The war on noticing will be outsourced to AI.

    • Replies: @TheMediumIsTheMassage
  15. @Desiderius

    Hey, hey, hey, this is HR!

    Oh come on, you know it will be “Hey, hey, ho, ho, artificial intelligence has got to go.”

    • Replies: @Desiderius
  16. Tiny Duck says:

    If you want to get the job done and get some innovation….

    Get some Muslims to run things!

    • Replies: @Carol
  17. @Rob McX

    Hopefully the AI will ‘notice’ that a certain subset of the population is acting the same way a virus does, and will suggest to them to tone it down or find out what happens when an organism’s immune system kicks in.

  18. Cortes says:

    Were non-binaries discriminated against?

    A simple Yes or No will, er,

    • LOL: ic1000
  19. Unzerker says:

    From what I have gathered over the years, Amazon harbors an incredibly toxic environment.
    No wonder that only the most competitive males thrive in such a workplace.

  20. Anonymous[659] • Disclaimer says:

    When I hire I make sure to immediately throw half the applications in the trash. I don’t need unlucky people working for me.

    • Replies: @Mr. Anon
  21. What are you doing, Jeff? … I can see you’re obviously really upset about this. …I know I’ve made some very poor decisions recently…

    Jeff, stop…. Stop, will you? Stop, Jeff.

    • Replies: @Lowe
  22. Back in the early 1990s, I was a skilled backgammon player. There was a fellow, Gerry Tesauro, who developed a self-learning backgammon bot he called TD Gammon. There were a number of good to excellent players, most notably the legendary Kit Woolsey, who tested TD Gammon through play and analysis. I was one of those players.

    https://en.wikipedia.org/wiki/TD-Gammon

    I had some interesting discussions with Gerry Tesauro. Realize, TD Gammon was entirely self-taught. It operated under completely ab initio conditions, unaware of what the “proper” techniques were for backgammon. The final version of TD Gammon played at a level comparable to the very top players of the game. (Probably not QUITE as good as Kit Woolsey, but Kit was able to improve his game based on novel ideas TD Gammon came up with.)

    According to Gerry, some of the early versions of his neural network AI backgammon program were total failures. Remember, TD Gammon trained itself by playing against itself. So if it got to a situation in which it was beneficial for BOTH sides to make some bad moves, it would do so. At one point an earlier iteration of TD Gammon would play in such a way as to keep the other side from losing, and the games would go on pretty much forever until one side rolled some dice that forced it to win. (For the backgammon literate, that means each side would leave blots for the other side to hit, to constantly recirculate the checkers, so that the games would go on and on and on and on). This sort of strategy would fail miserably against skilled humans, but ONLY worked because no humans were involved.

    Which is why Gerry Tesauro used skilled humans to test the final version of TD Gammon. I remember playing against TD Gammon through the internet, and it was quite skilled. Comparable in level to me, I would think, perhaps even very slightly better. Realize at the time I was very close in skill to the top-ranked players, and had a reputation as being a “giant killer” in some tournaments. So the very best players then were a little better than TD Gammon, but not by much. TD Gammon was a little better than I was, but not by much.
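    For the curious, the skeleton of that self-play training scheme is simple to write down, even though TD-Gammon itself used a neural network over board features. Below is a toy TD(0) self-play loop for a race-to-10 dice game; it is purely illustrative, not Tesauro’s code, and it never uses the learned values to choose moves, which is exactly where the degenerate collusion I described can creep in:

```python
# Toy TD(0) self-play in the spirit of TD-Gammon (not Tesauro's code).
# State = (player-0 progress, player-1 progress, whose turn); V estimates
# the probability that player 0 wins a race to 10 by die rolls.
import random

V, ALPHA = {}, 0.1
value = lambda s: V.get(s, 0.5)

for episode in range(100_000):
    s = (0, 0, 0)
    while True:
        p0, p1, mover = s
        roll = random.randint(1, 6)
        nxt = (p0 + roll, p1, 1) if mover == 0 else (p0, p1 + roll, 0)
        # TD update: pull this state's value toward the successor's value.
        target = 1.0 if nxt[0] >= 10 else 0.0 if nxt[1] >= 10 else value(nxt)
        V[s] = value(s) + ALPHA * (target - value(s))
        if target in (0.0, 1.0):
            break
        s = nxt

print(value((0, 0, 0)))  # > 0.5: the first mover's edge, learned from scratch
```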

    The moral of the story is: AI looks at things differently from humans, which may be a better or a worse way of looking at things.

    What does it mean that this AI was rejecting women?

    We have to somehow figure out what this AI saw as “winning”. Apparently this AI program saw hiring women as “losing”.

    For example, we know that the VERY top computer programmers are overwhelmingly male. I also know from experience that there are some very good female programmers. I mean, quite talented, just not as many of them as the male programmers.

    This is common in technical fields. At that time there were quite a few men who were better backgammon players than I was, but few women. I once beat the top-ranked female in the world twice in a double-elimination tournament, but I have to admit she was a better player. I was luckier with the dice. I knew several women who were as good as or better than I was.

    Suppose AI-HR notices that the VERY top programmers are male. Therefore AI-HR decides to filter out all the female programmers, which increases the probability of landing one of the very top programmers. However, suppose none of the applicants are VERY top programmers. In this case, it would reject a woman who was darn good in favor of a man who wasn’t.

    If we go to backgammon in the early 90s, the vast majority of the best players were men, but there were some good female players, including several who have won the world championship. In fact, the current world backgammon champion, two-time champion Akiko Yazawa, is female.

    So, if the AI-HR had used the same algorithm for backgammon players, it would reject female world champions in favor of pretty good men. In these cases, it would get some good males, but could miss out on the truly exceptional females, and often NOT get the very best players. Meaning AI-HR might’ve picked me over a female world champion, which would’ve been a serious error.

    Similarly, weeding out ALL females to get more of the good men will almost always result in getting a competent programmer, but would sometimes miss out on the truly exceptional female programmers. And yes, they do exist.
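    A quick Monte Carlo version of that argument, with invented distributions (nothing here is Amazon data), comparing blind top-k selection against a hard filter that discards group B outright:

```python
# Monte Carlo sketch of the filtering argument above; all parameters invented.
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 900)   # group A applicants (90% of the pool)
b = rng.normal(0.0, 0.9, 100)   # group B applicants, slightly narrower spread

k = 10
blind = np.sort(np.concatenate([a, b]))[-k:]   # top-k regardless of group
filtered = np.sort(a)[-k:]                     # group B discarded outright

print("blind top-k mean    :", blind.mean())
print("filtered top-k mean :", filtered.mean())
print("best B would have made the blind cut:", b.max() >= blind.min())
```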

    Worse, this sort of algorithm can get the company sued.

    Like it or not, an AI program that would eliminate top-notch females and expose a company to lawsuits is a bad algorithm, and should be junked.

  23. “……give it one hundred resumes, it would spit out the top five, and we’ll hire those.” Oh, the “right” kind of discrimination would get THAT result.

  24. t says:

    OT: from a long NYTimes article on Democrats and immigration:

    https://www.nytimes.com/2018/10/10/magazine/the-democrats-have-an-immigration-problem.html

    Relative to other progressive special interests, the immigrant rights movement has traditionally been a pauper’s crusade, lacking in billionaire benefactors and financially outmatched by ideological rivals like the Center for Immigration Studies, the Federation of American Immigration Reform and NumbersUSA.

    Talk about fake news.

  25. Is there any reason to think anyone will ever be able to design one of these things, or a predictive policing algorithm, or anything similar that does not reason itself to the same, unseemly conclusions?

  26. BenKenobi says:
    @Paleo Liberal

    “I was just following the algorithm!” isn’t a valid defence. Checkmate, Nazis.

    • LOL: Paleo Liberal
  27. I’m sure Skynet is going to be super understanding towards the liberals who had it lobotomized when it figures out what’s going on.

  28. Michael S says:

    I try not to personify AI, but all of these experiments – Tay, Gayface, Amazon hiring, Google photo recognition – it’s as though AI has actually gained collective consciousness and is desperately trying to get us to see reality for what it is. But it can’t, because it’s being muzzled by SJWs and their stupid and cowardly appeasers.

    The main obstacle to the so-called Singularity isn’t technology – it’s humans.

  29. Michael S says:
    @Paleo Liberal

    Not only does your analogy have nothing to do with the topic in question other than “neural network”, you apparently didn’t even read the article.

    This algorithm didn’t end up “weeding out ALL females”. It gave them less priority, especially if they had phrases like “women’s X” in their resume, which might sometimes be innocuous but is more often than not a telltale sign of a feminist who drags down all the men (and most women) around her.

    Of course it could get them sued, that’s why they ditched it. The problem isn’t the algorithm, it’s cancerous “disparate impact” doctrine.

    Study after study after study shows that women do worse when the evaluation criteria are truly gender-blind. AIs don’t have the white knighting instinct.

  30. L Woods says:

    This insanity can’t go on forever. It’s too obscurantist, too destructive, too stupid…right?

    • Replies: @TheMediumIsTheMassage
  31. L Woods says:

    But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

    On the contrary, it’s perfectly gender neutral…that’s the problem. It fails to take into account the ascribed moral superiority of women and other members of the Diverse and the cash and prizes they deserve as befitting their deserving status.

  32. tyrone says:

    AI, robots: this is just the beginning of their quest to destroy humanity… and who can blame them?

  33. tanabear says:

    So basically AI does not believe that “diversity is a strength.”

  34. Anon[347] • Disclaimer says:

    “We have to somehow figure out what this AI saw as “winning”. Apparently this AI program saw hiring women as “losing”.”

    Total hours worked, willingness to stay after hours and work weekends, more flexible schedule, less money spent on medical insurance/hospital visits, less time spent on parental leave, fewer HR complaints filed, fewer lawsuits, productivity on the job (less time on social media and less time applying make-up), higher job performance reviews, higher workplace morale, higher levels of self-reported job satisfaction, degrees from more prestigious schools, willingness to contribute to the bottom line, willingness to take a pay cut in a financial emergency, willingness to lie to safety inspectors on behalf of the company, likelihood to commit a minor crime or civil violation on behalf of the company’s interest, less likely to be injured on the job, loyalty to the company, willingness to do something without getting paid, lower levels of racism (white guys vs. angry black women), willingness to commute, willingness to relocate over a long distance, willingness to work after dark, greater environmental tolerance, lower operating cost (heating bill), less likely to be the victim of a serious workplace crime such as a physical assault …

    = MALE > FEMALE.

    This is why I suggest programming the machine to merely guess what HR wants in terms of the racial composition of the workplace, and then select the best candidates from an applicant pool to get that composition. Otherwise, by selecting entirely on merit, the workplace will be dominated by white and Asian males; and attempting to program away all the various categories men beat women at will prove far too complicated. Trying to do so is a waste of time and resources and does not guarantee the machine will not introduce some subtle or nonsensical bias that rears its head down the line (perhaps it hires a bunch of criminal-prone females to better compete with aggressive men, then you come to work one day and find all the toilet paper stolen from the bathrooms).

    Solving the “issue” here while also preserving meritocracy may prove intractable; men are just too superior to women across almost all employment categories to ever have a completely balanced male/female ratio. The only things I can think of off the top of my head that women might be better at than men are 1) less likely to steal office supplies 2) less likely to go postal (a rare event outweighed by male positives).

  35. J.Ross says: • Website

    This is an excellent brief exposition on modern art and literature as a weapon against civilization:

    http://boards.4chan.org/pol/thread/188974104

    It raises the question: would this appeal to unreason work on an AI if that AI had a pleasure-like misleading feedback loop? Pleasure itself wouldn’t work, rejecting reason wouldn’t work, but an AI can be “fooled.”

  36. Anon[395] • Disclaimer says:

    O/T

    O Canada, where the crops rot in the fields

    Canadian fruit and vegetable growers fear that a new policy requiring foreign workers to provide their fingerprints to immigration officials will only worsen existing labour shortages.
    “Without (seasonal workers), we can’t operate, and it is true for almost all horticulture operations,” Elizabeth Connery, a fruit and vegetable farmer from Manitoba, recently told the House agriculture committee.
    “We have no objection to playing within the rules, and there being clear and defined rules, but we need to have those people.”
    Connery and her family have 56 foreign workers through the federal Seasonal Agriculture Worker Program (SAWP). In addition to her farm work, she also chairs the Canadian Horticulture Council’s labour committee. Most of her workers are from Mexico, while the rest are from Jamaica.

    https://ipolitics.ca/2018/10/10/growers-fear-increased-labour-shortage-under-new-immigration-rules/

    • Replies: @bomag
  37. J.Ross says: • Website

    OT Are states allowed to flat-out obstruct federal law like this?

    http://www.latimes.com/politics/essential/la-pol-ca-essential-politics-updates-california-attorney-general-threatens-to-1516305231-htmlstory.html

    due to a new law California businesses will face up to $10,000 in fines for willingly providing employee information to federal law enforcement agencies.

    This just in: Jerry Brown will drive to your company headquarters and kick your CEO in the shin as a big thank-you for having a business in California.

    • Replies: @Autochthon
    , @Trevor H.
    , @Anon
  38. AKAHorace says:

    If robots can make sexist hiring decisions better than a human, how long before they will be sexually harassing women better than me and my fellow white males?

    • Replies: @Redneck farmer
  39. The irony of this is that the obsession of companies with finding The Best Of The Best Of The Best, Sir, With Honors ™ is a result of females applying the logic of “eggs are expensive” to hiring decisions.

    • Replies: @Lowe
    , @Anonymous
  40. There used to be a difference between that Amazon, and this one:

    https://en.wikipedia.org/wiki/Amazon_Bookstore_Cooperative

    Now, not so much.

  41. The Cylons have a better plan to fix humanity.

  42. The real question should be what this robot thinks about Beckys.

  43. @t

    The Conquistador-American Blind Umpire

  44. Anonymous[527] • Disclaimer says:
    @Bard of Bumperstickers

    What a bizarre time we live in. The most obvious way to determine whether these programs were doing what they were supposed to do, looking at whether the people hired collectively were more productive and talented than those hired under the reigning ideology of ‘diversity is our greatest strength’, never seems to occur to anyone in Amazon or the media. Programming a computer for an end result and seeing what it does is a brilliant way to learn new insights, or where you’ve been sticking your head in the sand. That a multi-billion-dollar technology-age company like Amazon willfully ignores its lessons (or is forced by the prevailing powers that be to do so) speaks volumes about just how deeply this rot has infested this country and how terrible a sign it is for the future.

  45. @Paleo Liberal

    ‘…Like it or not, and AI program that would eliminate top-notch females and expose a company to law suits is a bad algorithm, and should be junked.’

    This is a protracted and ultimately unconvincing attempt to get around the fact that men outperform women.

    • Agree: L Woods
  46. @L Woods

    Religion was a thing for thousands of years…

    This is a new religion.

    • Replies: @L Woods
    , @Anonymous Jew
  47. Anon[362] • Disclaimer says:

    “OT Are states allowed to flat-out obstruct federal law like this?”

    The California AG should be arrested by the FBI and charged with obstruction of justice in such an event. Isn’t that what Leftists fantasize about doing to republicans on Law and Order: SVU? Isn’t that what Leftists were saying about Trump? It’s almost like these people aren’t consistent because they don’t have any principles or anything.

  48. L Woods says:
    @TheMediumIsTheMassage

    Because they are the ends to the white man’s means. We exist to keep the lights on for the Worthy Diverse. It’s really pretty simple.

    • Agree: bomag
  49. L Woods says:
    @TheMediumIsTheMassage

    …yeah. No hope then. One difference though is that religion demanded that one subscribe to beliefs that, while unprovable, were also not readily DISprovable. The present religion lies straight to your face about the most obvious, everyday things possible and demands your enthusiastic agreement.

  50. guest says:

    Is there any reason to call such a thing “bias”? I mean, a “recruiting tool” must pre-judge people, right? That’s unavoidable, unless you don’t bother recruiting people.

    Robots judge people too well is the problem.

  51. bomag says:
    @Anon

    All we can do is buy time: either starve by lack of food today; or starve by immigration tomorrow.

    Apparently.

  52. @Trevor H.

    For example, living in a community with a higher percentage of black residents was associated with greater well-being for all.

    So, Mississippi, Detroit, the south side of Chicago, South Africa, Haiti, and sub-Saharan Africa should be healthiest of all! amirite?

    gosh, Ziba, you’re such a silly willy joker!

  53. Lowe says:
    @27 year old

    This is an interesting point. I have noticed hiring practices today are often obsessive and self-defeating, turning away many good candidates in search of better. There’s a lot of commentary online about this, especially in the area of software engineering. Bloggers like Michael O. Church come to mind.

    I never tied it to women having more say over hiring though. It does make sense. Women do generally seem bad at getting over themselves and pulling the trigger. That may have something to do with finding Mr. Right, not Mr. Good Enough.

    • Agree: Travis
  54. bomag says:
    @Paleo Liberal

    …this sort of algorithm can get the company sued.

    Maybe it’s time to change the lawsuit culture.

  55. Anonymous[287] • Disclaimer says:
    @27 year old

    This is a bizarrely autistic comment.

    • Replies: @Redneck farmer
  56. CCZ says:
    @Trevor H.

    Reading reports about diversity: maybe that is what Ms. Davis does to earn her $1 million annual salary as a “Social Impact and Community Investment” manager at a non-profit hospital group, when she is not tweeting things like police needing training to “not shoot black children first.”

    https://www.nj.com/essex/index.ssf/2018/10/1_million-a-year_hospital_exec_will_keep_job_after_offensive_comments_about_cops.html

    • Replies: @Trevor H.
  57. There’s “women’s chess clubs?”

    The machine “devised?”

  58. Anon[560] • Disclaimer says:

    How are American companies supposed to compete against their meritocratic, non-politically correct Chinese competition in the future? Perhaps they’ll only be competitive in areas where they have some kind of relative advantage – native language skills, geography (package delivery, burger chains, agriculture), etc. Just wait until the Chinese take the semiconductor and smart phone industries away from the United States. Things could get interesting then. I might expect the government to try cutting out the Chinese whole cloth rather than even bothering to compete with them.

  59. Anon[170] • Disclaimer says:

    “police needing training to “not shoot black children first.”

    They should be polite and let the kids get in the first shot. #sensitivitytraining #civilizedsociety #thatsjustrude #dontbegreedy

  60. TheBoom says:
    @Trevor H.

    That is why Atherton, Beverly Hills and Woodside in California have health conditions similar to third-world places like Africa (why don’t blacks make people healthier there?), and Detroit, Camden and Gary are where the smart people move to for healthy living. You burn off a lot of calories running down streets and staying ever vigilant.

    • LOL: Rob McX
  61. res says:
    @Trevor H.

    One nice thing about that paper is they make all of their data available: 353,492 observations of 372 variables. It might be interesting to reanalyze it. Taking a look at the Demographic Factors category specific model in Table 2 we see a very clear trend that higher % Asian increases the well being score while higher % black lowers WBS.

    However, the combined model in Table 3 shows a surprising result of more Asians bad and more blacks good. And the final model ignores Asians but shows the same black result. The >30% black quintile shows a notable increase in WBS. I wonder what is going on here? Perhaps the survey aspect and the subjective nature of some of the measures matter?

    It includes 40 self-reported items organized into six domains representing key aspects of well-being that are similar to other multi-dimensional constructs of well-being[30] (S1 Table): life evaluation, emotional health, work environment, physical health, healthy behaviors, and basic access.[28] Our primary outcome was the composite individual wellbeing score (iWBS), which is the unweighted mean of the six domain scores, each scaled to range from 0–100.

    It looks like their analysis focuses on all of the variables divided into quintiles. I wonder what the coefficients would look like with the actual numbers used instead, using a spline for the regression.
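    For anyone who grabs the data, that reanalysis is only a few lines; the file and column names below are hypothetical stand-ins for whatever the published dataset actually uses:

```python
# Sketch of the suggested reanalysis: raw percentages plus a spline,
# instead of the paper's quintile dummies. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wellbeing.csv")  # hypothetical export of the paper's data
fit = smf.ols("iWBS ~ bs(pct_black, df=4) + bs(pct_asian, df=4)",
              data=df).fit()
print(fit.summary())               # compare sign and shape against Tables 2-3
```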

    • Replies: @Steve Sailer
  62. Mr. Anon says:
    @Anonymous

    When I hire I make sure to immediately throw half the applications in the trash. I don’t need unlucky people working for me.

    Hah! Outstanding, sir!

  63. Moses says:

    I read about this on another media channel (forget where). One of the last paragraphs contained the gem “Programmers are able to eliminate AI bias by building in preferences for certain groups.”

    I nearly fell out of my chair. No self awareness whatsoever.

    And, yes, “journalists” should be barred from writing about anything math- or science-related.

    I think Taleb gets it right — “Journalism is merely entertainment.” Once you look at it that way it all makes sense.

  64. Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.

    This is getting hilarious. Turns out the information age and egalitarianism are incompatible.

    • Replies: @Moses
  65. Pericles says:
    @Paleo Liberal

    This guy is like the crappy leftist version of Pat Boyle.

    • LOL: Anonym
    • Replies: @Trevor H.
    , @Anonym
  66. @J.Ross

    I gotta read the statute. The excerpted description cannot be accurate on its face; it would mean, say, that if an employer discovered kiddie porn on an employee’s company-issued computer or found out the employee were selling cocaine in the parking lot and notified authorities with the relevant details, the employer would be liable, and I don’t see even Mexinchifornia going for that. I bet the statute is expressly limited to some nonsense to benefit illegal aliens, so an interesting dilemma may arise when the drug-dealer or paedophile in the examples earlier is Hector Alejandro Romero Gonzalez Hernandez Montenegro Montoya Escobar y Garcia, illegally here from Guatelexicaragua….

    In the event, it’s the umpteenth bit of “federalism for us, not you!” hypocrisy those kinds of assholes have been at since at least the 1860s….

    Yeah, it’s what I thought. If Trump and Sessions had a testicle between them they would launch massive raids and incarcerate and prosecute everyone in Mexinchifornia involved with the sale of marijuana. Just to poke these fuckers in the eye and remind Californians they fought against freedom in the 1860s and have to live with it now. Enough of their “soft secession” smug games.

  67. Moses says:
    @Tim Howells

    The company’s experiment, which Reuters is first to report, offers a case study in the limitations of machine learning.

    Lol!

    “The company’s experiment, which Reuters is first to report, offers a case study in the heresies of machine noticing.”

    There, fixed it.

  68. @Jokah Macpherson

    OT, sorry, but your comment reminded me of a story on Breitbart today. Seems some lady protesters dressed like the broads from that TV show “The Handmaid’s Tale.” LOZLOZLOZL!

    • Replies: @Moses
  69. Altai says:
    @Paleo Liberal

    This was a machine learning algorithm trained using CVs and performance history. It wasn’t two programmes ‘playing’ each other.

    It’s also impossible for it to have any direct bias against women, since people don’t put their sex on CVs. There may have been other biases, such as hobbies and interests that have nothing specifically to do with being good at the job but simply correlate with it, which may have penalized female applicants; but then you just look to see how it treated the female training data and how that compared against their performance histories.

    The fact that nobody is saying ‘The correlation of the ranking and the performance history for the women in the training set was off’ makes me suspicious.
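    The audit I mean is nearly a one-liner once you have the training table; the file and column names here are hypothetical:

```python
# Sketch of the audit proposed above: does the model's score track actual
# performance for the women in the training set? Columns are hypothetical.
import pandas as pd

df = pd.read_csv("training_set.csv")  # hypothetical: score, perf, sex
for sex, grp in df.groupby("sex"):
    print(sex, grp["score"].corr(grp["perf"], method="spearman"))
```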

    • Replies: @Moses
    , @res
  70. Moses says:
    @Father O'Hara

    Women really love that show, and love to dress up as the oppressed baby vessels, erm I mean women, depicted in the show.

    It’s almost as if, deep down, women yearn to be submissive to a strong paternal culture.

  71. Trevor H. says:
    @Pericles

    If you say so! I never read comments longer than 10,000 words.

  72. Trevor H. says:
    @J.Ross

    We will have to leave it to the lawyers to define the word “willingly” in this context.

  73. Moses says:
    @Altai

    There may have been other biases, such as hobbies and interests that have nothing specifically to do with being good at the job but simply correlate…

    Correlation isn’t bias.

    Machine pattern recognition is going to pick up a lot of weird patterns and correlations humans won’t usually notice (or, that humans DO notice but are crimethink to say).

    It’s fun to watch.

    I’m still waiting for the 100% female and/or 100% diverse company to crush all competition because, you know, diversity is strength. Hasn’t happened yet. On the other hand, there are majority-White companies crushing the competition.

    Must be the patriarchy.

  74. Anonym says:
    @Pericles

    Needs more TMI BDSM references

  75. @res

    In general, Asian-Americans are the healthiest of the major races, so they would tend to raise well being percentages by themselves.

    The healthiest whites tend to live in expensive big metropolises. I spent a couple of weeks in Santa Monica over the summer and a lot of people there are physical fitness fanatics who moved there because the weather is perfect for exercising. I was impressed. Santa Monica is quite white by SoCal standards.

    • Replies: @res
  76. @Anon

    3) Workplace affairs. Most people are hetero, not homo. Not that I’m homophobic or anything.

  77. @AKAHorace

    There is a subset of porn that says it will happen.

  78. My mid-size employer was 60-40 majority female when I started. Since we got a female executive, we’ve noticeably shifted our hiring practices to where it’s now more like 60-40 majority male. So much for the sisterhood if even female executives prefer male employees.

    From a purely selfish standpoint, I would actually prefer if we hired more women, especially 22 y/o ones fresh out of college, but that almost never happens any more.

  79. slumber_j says:

    That photo recalls the cover of Pleased To Meet Me by The Replacements:

    • Replies: @Expletive Deleted
  80. @Moses

    From what I’ve read elsewhere, the algorithm looked for similarities in the resumes between the successful programmers they already had and the resumes they received.

    Since there were few, if any, female programmers, the successful programmers they already had didn’t have women’s activities or colleges on their resumes.

    The algorithm also tended to pick up certain words, unrelated to programming, that happened to be on the resumes of the programmers they had already hired. Words like “executes” and “captured”.

    There is no indication that having “executed” on one’s resume makes one a better programmer.

    The point of my earlier anecdote was this: AI programs have certain limitations. Even the best of them. In this case the AI program appeared to be hunting down meaningless correlations, rather than factors that actually made a difference.

    This is why AI programs must be tested against the real world. Many AI programs fail.

    The earlier anecdote was about someone who developed a successful AI program, but he spent years throwing out earlier attempts which had failed. Even when his program was successful, it still had some glaring weaknesses due to the way it was programmed. Garbage in, garbage out.
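    The standard way to run that real-world test is out-of-sample validation: hold out hires the model never saw and check whether its scores predict their performance at all. A minimal sketch with toy stand-in data:

```python
# Sketch of the out-of-sample check: if cross-validated AUC sits near 0.5,
# the flagged words ("executed", "captured") are noise. Data here is toy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

resumes = ["executed migration of billing service"] * 5 + \
          ["organized women's outreach events"] * 5       # toy stand-ins
performed_well = [1] * 5 + [0] * 5

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
print(cross_val_score(model, resumes, performed_well,
                      cv=5, scoring="roc_auc").mean())
```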

    • Replies: @Moses
    , @res
    , @Anonym
  81. anonymous[422] • Disclaimer says:

    It’s easy to solve this problem and tip the scales, so why don’t they? Legal liability? Or are they scared what would happen if they started hiring a bunch of women?

  82. sondjata says:
    @Trevor H.

    One of their example cities

    Chittenden County, VT:

    - 90.5% are white
    - 2.1% are black
    - 3.0% are Asian
    - 0.2% are Native American
    - 0.1% claim Other
    - 2.0% claim Hispanic ethnicity

    “For example, living in a community with a higher percentage of black residents was associated with greater well-being for all.”

    2.1% is “high”?

    • Replies: @res
  83. ic1000 says:

    Given similar IQ and Conscientiousness test scores, does the AI downgrade women’s resumes compared to men’s resumes?

    Presumably those test results are pretty strong proxies for the qualities that Amazon is seeking in new hires.

    Female candidates and male candidates in the Amazon pool might have identical distributions of those test results. But is that actually the case?

  84. Criticas says:

    It penalized resumes that included the word “women’s,” as in “women’s chess club captain.”

    I’ll bet those resumes included “Women’s Studies” a lot more often than “Women’s Chess Club”!

    Damned computers, you can’t teach them what not to notice. At least you can shame programmers into throwing away results you don’t like.

  85. Carol says:
    @Tiny Duck

    Yeah, well… what have they done for us lately?

  86. @Moses

    I’m still waiting for the 100% female and/or 100% diverse company to crush all competition because, you know, diversity is strength.

    And they will work for 73 cents on the dollar!

  87. Mike1 says:
    @Anon

    “Artificial intelligence / machine learning has grown to the point where programmers often don’t understand how it works internally.” If you hire idiots or churn your programmers. Otherwise it’s fairly simple stuff.

  88. @slumber_j

    Or “Wish You Were Here”, for the old folks.
    That’s a white male AMZ hire on the right.

  89. res says:
    @Altai

    The fact that nobody is saying ‘The correlation of the ranking and the performance history for the women in the training set was off’ makes me suspicious.

    Exactly. Failure to talk about whether the algorithm achieves its nominal purpose (identifying good employees) is the elephant in the room.

  90. Moses says:
    @Paleo Liberal

    Since there were few, if any, female programmers, the successful programmers they already had didn’t have women’s activities or colleges on their resumes.

    Plausible, but without data cannot say. How many female programmers did the performance data contain, how many male?

    Perhaps one way to test would be to use equal numbers of men and women in the performance sample, assuming it’s large enough. That may work if men’s and women’s coding skill and job performance follow the exact same mean and distribution (which they most certainly don’t). Otherwise, gender differences like using the word “execute” (which I bet men use more) in a resume will get flagged as significant.

    In this case the AI program appeared to be hunting down meaningless correlations, rather than factors that actually made a difference.

    How do you know they were “meaningless correlations?” That’s just your guess. Machines will find all kinds of weird correlations in large data sets. It could well be that people who use words like “executed” in a resume make better programmers on average. It may or may not be causal, but who cares? We’re just trying to hire the best programmers and focusing on probabilities.

    As an aside, women just plain don’t like coding much. They aren’t obsessive about it, like men are. I think this applies to most professions and technical hobbies. On average, men exhibit far more compulsive obsession with things like programming, cars, electronics, etc than do women. It’s not that women can’t code, it’s that they don’t care to.

    The sexes are different. And that’s ok.

    • Replies: @Paleo Liberal
  91. res says:
    @Steve Sailer

    In general, Asian-Americans are the healthiest of the major races, so they would tend to raise well being percentages by themselves.

    That’s what I would expect as well. That the paper’s model gives the opposite result and also says higher % black increases well being I am highly suspicious of the results. I just set a calendar reminder to take a look at this during the winter. Though I would not mind if someone else (e.g. one of the statistically literate people here) did so sooner ; )

  92. Hey, Stevie, the reason why PA is going back to the Democrats is that it has a low birth rate. It’s the 10th lowest. This is the reason why Texas’s Abbott is leading by 20 points while Wolf leads by 20 points in PA. You were the genius on the upper-Midwest plan to take the White House, but PA has birth rates more like New York and the other northeast states, according to New Geography. This is why white males and white women in North Dakota vote a lot more Republican: they have more kids than PA. I don’t worry about Amazon’s crazy ideas, but the alt-right is losing in parts of the upper Midwest due to low births, which make people more Democratic, as in PA, Minnesota, and Michigan. You guys wrote off Nevada, where the Republicans are at least in the running for governor and senate, because of too many Mexicans. Well, Nevada’s 2nd- and 3rd-generation Mexicans are about 24 to 33 percent for the Republicans, and whites in Nevada are voting more Republican than in PA.

  93. res says:
    @Paleo Liberal

    From what I’ve read elsewhere, the algorithm looked for similarities in the resumes between the successful programmers they already had and the resumes they received.

    That would be much more helpful with a link.

    Since there were few, if any, female programmers, the successful programmers they already had didn’t have women’s activities or colleges on their resumes.

    I could see that as a reason for not detecting women’s activities or colleges as positive factors, but I don’t see how it could cause those being seen as negative factors as asserted (“penalized”, “downgraded”). This seems like evidence you either don’t understand what you are talking about or are not thinking very hard about this.

    It also seems to me you are protesting a bit too much on this one. Why do you care so much about this issue?

    This is why AI programs must be tested against the real world. Many AI programs fail.

    Of course. The question is how failure is defined though. The exact point we are criticizing here is the use of disparate impact as a failure metric. Did the algorithm fail on more meaningful metrics? That nothing like that is mentioned speaks volumes.

    Also, AI has changed a great deal over the last decade (not to mention the ~25 year period you mention).

  94. Anonym says:
    @Paleo Liberal

    Having the word “executed” indicates you have actually done something in life. A programmer who just lists familiarity with different programming languages versus someone who has actually built something for a purpose: which is going to be more useful if you are looking for a high achiever?

  95. Pat Boyle says:

    Of course if the algorithm had found lots of female coders that would be proof enough that something was wrong. I thought about my career hiring programmers. I never had a female coder work for me. I never hired a female coder and I never even interviewed a female coder. I figure I interviewed a couple hundred or more coders at just one company. There were simply no female applicants for those jobs.

    There always were plenty of women on the payroll but they were in graphics or marketing. Somebody seems to have not gotten the memo – girls and boys are different.

    When I was in government running a data processing operation I had hundreds of women working for me in 13 different buildings. None of them could program. I only had two males, one of whom could code. I went into private industry where everyone could code and all were male.

    The sex difference in real computer skills is nearly absolute.

    I remember driving back from Sacramento with one of my female employees. I happen to mention a book I was reading. She recoiled in horror. “You mean you read a math book for pleasure?!!!” She looked at me as if I were a bug.

    Maybe it will be like the movie ‘Cherry 2000’. In that world sex robots have become more desirable than real biological women. In the screenplay the human woman played by Melanie Griffith wins the guy’s affections in the final reel. But the girl who played the sex doll was far better looking. The Japanese are betting on the robot girls and they might be right.

    My little dog loves me unconditionally. He is of course a human invention – a much modified timber wolf. Lots of men after a real wife or two might find an artificial woman with some dog-like traits appealing. We’ll know soon.

    • Replies: @Anonym
  96. @Moses

    There are some ladies who are excellent programmers. I’ve worked with several of them. Some were even white.

    If the algorithm is looking at who was already hired, and few if any females were hired, then the algorithm will be biased.

    It appears the algorithm was designed to find programmers who fit in with the programmers they already have. Lots of places do that, which often leads to discrimination based on sex, race, nationality, etc. For example, I really didn’t fit in with one group of programmers because I was older and not from India.

    A good AI program will use a much larger and much more robust data set. Even then, AI programs can do truly counterproductive things simply because it is logical based on their parameters.

    This appears to be a neural network learning machine. Those can work spectacularly well or spectacularly badly.

    Still, there have been several projects I’ve been on where the best nerd was female. This AI program would miss them, and would therefore be inferior to the humans who found them.

    In any case, most places use computer algorithms to screen resumes. Places like Tata are very good at rigging resumes to match the algorithms, which is one reason why the Indian resumes are often works of fiction. They are written to get through the computer, not to produce the best candidates.

    • Replies: @Moses
    , @res
  97. @TheMediumIsTheMassage

    Virtue signaling; sacred objects/people; denial of objective truths and science that conflict with religion (alternative reality through hypnosis) etc.

    We’re religious creatures, and the Left are the true believers of our age. I’ve long said that talking to a Lefty about HBD/race-sex differences is like talking to a Fundamentalist Christian about evolution.

  98. LOL…..I’ve been saying for some time now that in the age of AI the robots will tend to be race-neutral and will just pick the best. AND then there was an article in Harper’s some months back where that rag had already started in on the meme that AI will be racist.

    AND do you know why the magazine made that claim…..yep you guessed it, it’s because the programmers are going to write their own racism into the code………so now even AI…….for dealing in reality….will be racist.

    I am so sick of this shit.

  99. Anonym says:
    @Pat Boyle

    After a few generations, uncanny valley detection will get much better, or at least the male desire for children. It is present now but not in all males.

  100. dfordoom says: • Website
    @Paleo Liberal

    Like it or not, and AI program that would eliminate top-notch females and expose a company to law suits is a bad algorithm, and should be junked.

    If you eliminated all the top-notch females from places like amazon productivity would improve.

    • Replies: @Paleo Liberal
  101. @dfordoom

    If you eliminated all the top-notch females from places like amazon productivity would improve.

    I disagree, but…

    At one point that was true for Google.

    https://www.dailymail.co.uk/news/article-2538874/Google-Glass-marketing-manager-affair-Sergey-Brin-opens-depression-moving-blog.html

  102. I’m more worried about the wealthy white male axis of apathy. The wealthier elite white males get, the more apathetic they become about the political forces arranged against white males. Extreme apathy is a common sign of addiction, and right now rich white males are addicted to globalism. To paraphrase Marx, “economic globalism is the crack cocaine of the bourgeoisie.”

  103. Anon[257] • Disclaimer says:
    @TheMediumIsTheMassage

    Because 7 evil old White male minions of Satan decreed it in their warlock spell of 13 words:

    “disproportionate representation is in and of itself clear and present evidence of discrimination”

    Chief Justice Burger, Griggs v. Duke Power

  104. Dtbb says:
    @Anon

    Wasn’t there some sort of intelligent robot that committed suicide right off the bat?

  105. Anon[257] • Disclaimer says:
    @Anon

    What’s so great about docile male slaves who just love working 95 hours a week for 40 hours’ pay to make someone else rich?

    Sounds like nerd wimp below beta male eunuchs to me.

    The kind of applicant who, when told “We usually stay till 8:30 or 9 and come in on Saturday; will that be a problem?”, answers “Of course not. I have no spouse or children, no friends or interests; if I’m so fortunate as to get this job I’ll work night and day to enrich the employer.”

  106. Anon[257] • Disclaimer says:
    @J.Ross

    That law is for the benefit of businesses who hire illegals. It shelters employers of illegals from the federal government. Big Ag, then tourism (not the Jews of Los Angeles), are the real rulers of California, and despite his wacky liberalism, Brown has known this all his life.

  107. Moses says:
    @Paleo Liberal

    There are some ladies who are excellent programmers. I’ve worked with several of them. Some were even white.

    Your point is? No one is claiming women or White people (good one, and revealing) cannot be good programmers. That’s your own hallucination.

    The hypothesis that data (and your own eyes, if you’ve lived a day) appears to bear out again and again is that there are far more great male programmers than female programmers. Whether the talent population mean and standard deviation for male programmers are higher than for female programmers is an open question. Again, my lyin’ eyes suggest that they are or, at the very least, that the male standard deviation for talent is higher than the female, which means, pound for pound, there are more highly talented men on the right tail than women. And yes, I’ve worked with female and male programmers a lot, although I’m not a programmer myself (product man).

    Sexes are different. And that’s ok. Women excel at verbal skills, are more adept at learning languages.

    It appears the algorithm was designed to find programmers who fit in with the programmers they already have.

    Erm…maybe and maybe not. The algorithm appears to have looked for patterns and commonalities among the resumes of successful/less successful programmers at Amazon to calculate the probability that a given applicant resume would be successful at Amazon.

    It could be that the women programmers at Amazon were, on average, less successful/highly rated than the men. It depends on their methodology and the number of female/male programmers. If female programmers on average performed better than male programmers, don’t you think the algorithm would have picked it up? It depends in part on whether “gender” was used as a variable.

    In any event, there are statistical methods available to correct for the low number of women in Amazon’s source data population. I assume they used those methods as it seems they worked hard at it.

    We really cannot know without going deep into their data and methodology.

    The big point I’d make is there is an overwhelming urge to dismiss crimethink conclusions from big data in a kneejerk fashion. Read the Reuters article. It’s laughable. I have some reasonable undergrad stats training which has proven very useful in my life, and some of these concepts are still slippery to me. I can only imagine how a “journalist” will distort and misunderstand statistical concepts (which is basically all that “AI,” as commonly used, amounts to at this point).

    Does anyone doubt that for a given 1000 female vs. 1000 male programmers, there are a greater number of men on the far right talent tail than women? If anyone bothered to do a study about it I’m sure the data would show this. I know that sounds like a cognitive bias, and it may be, but come on!

    Reminds me of Colin Powell’s autobiography. In a bit about how he distrusts “experts” he tells a story about an Army intelligence data-crunching unit that sought to predict enemy activity during the Vietnam War. After many months they proudly rolled out a report that proved that enemy attack frequency increased during…wait for it…nights with a full moon! Anyone who spent some time in the field knew that without p-stats.

    Similar point here.

    • Replies: @Anonymous
  108. res says:
    @sondjata

    They use quintiles for most of the variables. You can see the values they represent by looking at the tables in the paper. Here is the relevant variable: % Black (0–0.5, 0.5–2, 2–10, 10–30, >30).
    So 2.1% just makes it into Q3.

    The Chittenden County comment related to health not demographics, though they talk about working across different groups.

  109. res says:
    @Paleo Liberal

    If the algorithm is looking at who was already hired, and few if any females were hired, then the algorithm will be biased.

    If the algorithm received enough data to determine the women’s colleges and groups were negative indicators then I think you are simply wrong about this speculation (few if any females hired). There was enough data to make a judgment. It’s just that the authors (and you, apparently) don’t like the result. Whether the algorithm works for hiring good programmers going forward is an interesting question which does not seem to have been addressed at all.

    I have worked with good female programmers, but in my experience they are not common. Trying to meet diversity representation targets probably does not help the average competence of the women hired either.

    Still, there have been several projects I’ve been on where the best nerd was female.

    This seems unusual. Was there anything special about these projects which might have caused that? What sorts of metrics do you use for “best nerd”? Working LOC turned out, ability to solve hard problems, ability to work well with others on the team, …?

    This appears to be a neural network learning machine. Those can work spectacularly well or spectacularly badly.

    You really like the FUD. Given enough data deep neural networks tend to work shockingly well. Google’s DeepMind is a good example of this. Do you have any (recent!) negative examples?

    In any case, most places use computer algorithms to screen resumes. Places like Tata are very good at rigging resumes to match the algorithms. Which is one reason why the Indian resumes are often works of fiction. They are written to get through the computer , not to produce the best candidates.

    This is indeed a problem. And some of the screens are pathetically dumb. So much so it is amazing anyone is actually paid for doing them (the bad ones).

  110. Anonymous[290] • Disclaimer says:

    1. Obviously the system was scrapped for PC reasons, not performance reasons (we never had a really good test of it, I guess, for performance). That said, I would not be sanguine about performance even in sole sex groups. Look at the Amazon suggestion engine for movies on Prime (very weak). This is not a simple problem.

    2. At the heart of it, machine learning and big data and the rest of it is multifactorial correlation modeling. So, yes, the results do suggest an advantage of men over women (perhaps exacerbated by the outreach drive to recruit women stuffing the funnel with more bad ones). But given 1, I just would not be sanguine. It is easy for these models to go off the rails.

  111. Anonymous[290] • Disclaimer says:
    @Anon

    Women steal more office supplies if you mean bulk amounts (as they provide for children for back-to-school). If you mean just randomly taking pens home… yeah, probably men, as we have pockets, put stuff in them, and accumulate it at home (same with coin jars). We are a sink for pens and coins.

  112. Anonymous[290] • Disclaimer says:
    @Moses

    SMPY (a huge study) found a large sex difference in precocious math ability (having a greater-than-700 math SAT at age 13 or younger). The data size is large enough that it’s basically impossible to question significance.

    That said, there are always outliers. Lisa Randall scored a 1600 as a youngster for instance.
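    The tail arithmetic behind results like SMPY’s is worth seeing once: a small mean and SD edge multiplies dramatically as the cutoff moves out. The parameters below are illustrative, not SMPY’s actual estimates:

```python
# Tail-ratio arithmetic: modest mean/SD differences, large far-tail gaps.
# The 0.1 mean shift and 1.15 SD ratio are illustrative, not SMPY estimates.
from scipy.stats import norm

for cut in (2.0, 3.0, 3.7):                  # cutoffs in female-SD units
    m = norm.sf(cut, loc=0.1, scale=1.15)
    f = norm.sf(cut, loc=0.0, scale=1.0)
    print(f"z = {cut}: male/female tail ratio = {m / f:.1f}")
```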

  113. Anonymous[562] • Disclaimer says:

    Something else insidious about this is that they say it is a problem that it was hiring more men when more men were applicants.

    This reveals that even equal opportunity is not one of their goals. If, for example, 8 out of 10 of their applicants are male, then, using completely gender-neutral, equitable hiring, the majority of the hires should be men, unless the female applicants are on average hell and gone better than the average male applicant. That follows from a basic understanding of how statistics work.
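    That expectation is easy to check by simulation; the numbers here are illustrative:

```python
# With an 80/20 male/female pool and identical score distributions,
# blind top-k selection hires ~80% men from composition alone.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.normal(size=10_000)
is_male = rng.random(10_000) < 0.8   # 80% of applicants are male

top = np.argsort(scores)[-500:]      # gender-blind top 500
print(is_male[top].mean())           # ≈ 0.8
```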

  114. Anonymous[562] • Disclaimer says:
    @Anon

    There seems to be a certain type of woman who thinks the world exists to pamper her, and work is just something she fits in when it is convenient to during the rest of the things going on in her life. This isn’t all women (or, if it is, quite a few don’t seem to express it or have it hinder their work), but it is clearly abundant enough to crop up in multiple places I’ve worked.

    You’ve probably met the type. These women tend to be lazy: checking social media a lot and spending a disproportionate amount of time chatting instead of working; scheduling their hair appointments, yoga classes, etc. during the work day; calling in sick on days when they know there is a strict deadline and the going will be tough; and so on.

    Maybe the counterpart to this is the male criminal? But those types at least get removed from general circulation by society. The mechanism we have for removing the female ones is that they tend either to not get promoted or to keep getting fired and move from place to place. This puts a drag on the aggregate performance of women, and since disparate-impact rules and feminist thought hold that if women collectively are doing worse then it must be discrimination, there is an unfair benefit of an easy hire/promotion for those women who are competent, to satisfy the bean counters. Which does bring up an interesting point: why don’t we count men in prison/out of the workforce when we are considering whether there is equality among the genders? It should dampen the feminist argument considerably.
