Purpose

From the readings and from your experience, what are the causes of burnout? Have you ever experienced burnout? How did you overcome it or mitigate its effects? As you enter the workforce, how do you plan on preventing burnout or becoming overly stressed and burdened by work and responsibilities?

I realized early on that I’m easily burnt out. It’s happened to me more times than I want to admit, all under different circumstances: schoolwork, internships, side projects, even hobbies. Contrary to popular belief, I found that long hours are not a burnout trigger for me at all; I used to stay up until 2 or 3 in the morning working, without feeling exhausted or losing focus.

Losing purpose, however, is my trigger.

I’m driven by a sense of purpose. When I start working on a project, I usually have a goal in mind; I’m excited about the project in some way, ready to see where it leads when it’s complete. But over time, especially when I’m completely focused on the project itself, I start to lose sight of the bigger picture, or find the original vision less important than I imagined. If it’s a small side project, I can turn my focus to something else, but if it’s my actual work, then I don’t quite have a choice. Especially when coupled with long hours, when the finish line seems far out of reach, burnout ensues.

Other people seem to feel the same way. I talked to a friend at Apple over the summer, who was struggling with burnout. He was frustrated with his boring yet demanding work project, and told me that he couldn’t see the meaning of the work, especially when other companies were working on more advanced technology. He wanted to break new ground and work on more cutting-edge technology, but his manager wasn’t supportive. He tried to cope by telling himself that “it’s just a job.” The coping wasn’t quite working.

“Burnout is about resentment.” It’s about knowing what matters to you so much that your current meaningless work makes you resentful.

I’m not going to watch myself turn my work into something meaningless. I’ve been practicing all the standard advice on avoiding burnout—working out, reading, taking vacations. They briefly empty my mind, so when I come back to examine my work, it looks fresh.

But more importantly, I’m trusting that my actual work is going to be important; I’ll solve hard problems and actually make a dent in the world in some way. Worst-case scenario, I can always find something challenging, something that makes a difference, that has purpose. After all, this is how I’ve saved myself from my burnouts, multiple times.

Damn, we’re all alike

  • How much does the Manifesto reflect your individual feelings and thoughts? Is it a warcry? What is it?
  • How much do you identify with the Portrait? Where do you differ?
  • How significant are stereotypes to how you view the world and how the world views you? Do you think the presence of a Manifesto or Portrait is helpful or harmful?

ND Computer Science Manifesto & Portrait by Andrew, Meghan, Shuyang, Zach


I’d like to believe I don’t fit the Notre Dame stereotype. On the surface level, I barely look like my fellow computer science students, and I’m pretty different from most Notre Dame students as well. I’m Asian. I’m international. I’m atheist. I’m gay. I’m liberal. I read copiously. I like to throw on a button-down and cuff my jeans when I walk out of my dorm in the morning. I can barely check any of the standard Notre Dame checkboxes.

I’ve learned to despise these checkboxes—they constantly remind me that I’m different; I don’t really belong. Whenever I’m in a class and we’re doing fun facts as an icebreaker, I can come up with thirteen while my classmates are searching for one; but whenever small talk starts with “what football team do you follow,” the only thing I can do is smile and nod along. I had never heard of the word “touchdown” until freshman year.

It’s brought me my fair share of doubts and issues, but in a way, being different is good. I know if I don’t find my position here, I’ll drown quickly. It drives me to work hard, to socialize, to advocate for change, to blend in. But looking past the façade, I realize I’m not fundamentally different from my peers.

Just like most others in our program, I have never really done anything too uncomfortable. Notre Dame is different and difficult, but it’s predictable, and I knew what I was getting into. Computer science was unexpected for my English-major freshman self, but I knew I’d be good at it. Then getting an internship at a big tech company was the logical next step. Then getting a job in the Valley after graduation.

It’s like my life was plotted out in front of me, and I’m not the only one on this path. No matter where we go, we almost always know what we’re doing, and where the path leads us.

Damn ND CS majors, we’re all alike.

In this sense, our manifesto is a chilling realization. The realization that if we keep being complacent, coddling ourselves in the bubble we’ve created, we’ll end up living comfortable, predictable, and uninteresting lives. The realization that there are ways to live our lives other than the tried-and-true paths. Consulting jobs in Chicago were the career option up until now; now Silicon Valley jobs are the career option. They are attractive in their own right, but we’re forgetting the many other colorful possibilities of life.

We’re still young. It’s the best time to dive into the deep end, to swim in the unknowns, to pursue something else for a change.

Something less comfortable.

Why not?

Did you negotiate your contract? Why or why not? On what points did you negotiate and how did that process go? Did you have to sign a NDA or non-compete in the process? What sort of cool perks and bonuses did you get? What do you make of the negotiation process? Is it ethical to ask for more? Is it ethical to challenge or modify the terms of your contract?

Two summers ago, at the end of my internship, I had a conversation with my manager, who oversees an organization of 100 people. I asked, somewhat naïvely, “how do you get people to do what you want?”

He pondered, briefly, and said, “everything is a negotiation. Relationships, putting your baby to sleep, even walking your dog is a negotiation. You should read up on it.”

Back then, I didn’t fully comprehend the depth of his answer. I guess I was looking for manipulation tactics, so his succinct answer came as a surprise. I only saw the negative connotations of negotiation: people assume negotiators are greedy and unethical. But now I realize that everyday negotiations are about relationships and trust; they’re how people come together to get things done.


Compensation negotiation follows the same idea, but is in fact a different game. The numbers are relatively insignificant to the company, but to the candidate they may affect their long-term compensation, since bonuses and raises all depend on the salary. The negotiation itself also reveals how highly the company values the candidate, and how people within the company treat each other.

More importantly, the power dynamic is wildly asymmetrical, and the company knows a lot more about compensation and market value than we do. Silicon Valley has a transparency problem with compensation: the Twitter #talkpay movement last year uncovered income inequality within the tech industry, and people accused tech companies of exploiting the compensation and negotiation process. This accusation apparently irritated these companies, and the Google employee who started it was unfairly treated as a result. Until we systematically fix this jarring problem, negotiation may be the best move for us.

Logically, not negotiating puts us at a disadvantage. Ethically, we’re only asking to be valued fairly. If we succeed, we’re a lot happier and the company won’t sweat it. If we don’t succeed, we lose nothing. Why would anyone not negotiate?


I was in a good position when I negotiated. I had a few really attractive offers in terms of the work, but I wasn’t very sure how the companies valued me, so I negotiated with all of them, mostly to learn more. All my recruiters expected me to negotiate: they all said I could talk to them if I had concerns about the number, because they valued me as an individual and wanted to make sure I was comfortable accepting the offer. My negotiations were successful; every company increased its compensation package significantly. It took me three minutes on the phone.

But I gained information by looking at how they responded: Google happily matched its offer to my other offers, Apple was initially reluctant but eventually agreed, and Palantir flew me to Palo Alto for a day to meet the team and answer my questions, and was very flexible when we discussed compensation.

Palantir’s gesture partially sealed the deal, I think. Again, negotiation is about relationships and trust.


After I signed my contract, Palantir mailed me a box of books, one of which, Getting More, is on negotiation. The book argues that it’s important for everyone to excel in negotiation, because “you can’t get more unless the other side is reasonably satisfied,” and the other party’s knowing how to negotiate is a big step towards meeting their goals.

If we accept a subpar offer and work unhappily, we won’t perform at our best. When we decide not to negotiate, we’re doing a disservice to the company too.

When companies want us to negotiate, why do we, ironically, shy away from it?

Answers to the wrong question

From the readings and from your experience, what is the ethos of the computing industry? That is, what are its core beliefs or guiding principles? How does the computing industry manifest these ideals? Is it successful in maintaining its principles? Discuss whether or not these principles match your own.

The ideal world, according to the tech industry, is Darwinian: all people are born equal, with the same rights and power to earn what they deserve; only the fittest thrive. Leaders of the industry embrace this ideal: Amazon uses fierce competition to drive employee productivity because “[employees] never could have done what they’ve accomplished without [purposeful Darwinism].” Silicon Valley godfather Paul Graham argues that “creating wealth, as a source of economic inequality, is different from taking it,” affirming that rich people deserve properly made wealth. Modern-day venture capitalism is also based purely on it: VC firms invest in tens or hundreds of different ideas, and as long as one pays off, it’s a win. Even the open source community shows signs of it: a select few projects that scratch users’ itches thrive, while others slowly die out without developers’ attention.

And we project the Darwinian ideal onto the outside world, in addition to using it as a guiding principle within. Today’s tech industry, especially the startup scene, is all about providing tools to “empower people and improve the world.” We pride ourselves on being a democratizing force. 30 years ago, people needed to spend days or months in a library to do any sort of research, if they had access to a library at all; today we have personal computers and Google and everything is just keystrokes away. 30 years ago, people wrote letters to newspapers and delivered speeches on soapboxes to have their voices heard; today we have Facebook and Twitter and a thousand other social networking apps and 140 characters packed with eloquence. We have gained the superpower to summon anything we could possibly want just by owning a smartphone.

I have to admit, I find this Darwinian vision captivating. If we weren’t in a meritocratic society, I might not have been able to gain what I have today. As I was watching Master of None, I identified, on an emotional level, with the immigrant parents, who worked incredibly hard and earned the life they’re living. My experience wasn’t nearly as difficult or tear-jerking, but as a “nonresident alien” in the U.S., I know I’m on my own, and I partially credit my effort for standing where I am today. And with the help of technology, I know I could reach even higher.

But I’m privileged, as are the majority of students at Notre Dame, and, I would argue, the majority of tech workers. We have stability that enables us to fully enjoy the technological advancements. We own smartphones and we have access to Google and Facebook and Twitter. We can even program the computer to do as we wish.

While many don’t.

We’re still in a world where people are literally killing each other.

So is this true meritocracy? The Atlantic astutely observed that “American institutions for nurturing merit — such as its system of formal education — are only becoming less and less egalitarian.” The people who succeed are those who have been privileged. “When you know you have a safety net, you are more willing to take risks.”

On a conceptual level, I believe in the Darwinism that the tech industry is endorsing, and my view is rooted in my personal experience; but on a practical level, I recognize its flaw. The basic underlying premise, that we’re an egalitarian society, is false. We have built open-access tools like Google, believing we’re distributing power and democratizing knowledge, and feeling good about ourselves, because these tools are for “everyone.” But giving everyone access doesn’t mean everyone can access them. There are people who don’t have access to the internet, people who can’t afford to eat, and people who don’t have a place to sleep at night. To truly become a democratizing force, we need to address their disadvantage, and give them at least a fair game to compete in.

[Image: “Equality vs. Justice” illustration — source: reddit]

In a way, technology is enabling, with which we have indeed gained a lot: access, transparency, convenience, power. But unless our industry collectively addresses the justice problem, I wouldn’t say we are a success. At best, we’re a bunch of middle-class Darwinian a-holes, ignorant of the reality, looking for answers to the wrong question.

The conundrum of ethics

For your second blog post, please write a response to one of the following questions: Why study Ethics in the context of Computer Science and Engineering?

Following the previous post, we computer scientists constantly find ourselves in ethical conundrums, where the better choice is not always obvious and clear-cut. It’s crucial to remember that in these situations, we have the ability to mold technology to conform to our belief of “better,” and it’s important to make an educated, careful choice.

This is not only theoretical. Even with everyday technology that doesn’t seem controversial, we too often find ourselves questioning its consequences. Twitter seems like an innocuous, convenient communication tool, but it was used for widespread online harassment during the infamous #Gamergate; the U.S. government needs its internal databases to function efficiently, but even one breach could expose the personal information of millions of ordinary citizens.

I’m joining Palantir after graduation—Palantir is uniquely positioned in the tech world, in that it deals with huge amounts of data and information. Its founder Peter Thiel, once asked whether Palantir was a front for the CIA, said that “CIA is a front for Palantir.” Palantir prides itself on solving the world’s hardest problems to make it a better place, but working with all kinds of sensitive data, the amount of care Palantir must take is hard to imagine. Inevitably I’ll run into the privacy vs. security tradeoff, and it’s incredibly difficult to declare what’s “better.”

There are harder questions within Palantir, on even larger scales. The New York Times has touched upon some of them,

Should Palantir keep working with the British government, despite its harsh press laws? The contracts continue. Some employees do not want Palantir aiding Israel, because they disagree with its policies toward Palestinians. There are still contracts with the Israeli government. Palantir has decided not to work with China. After an internal debate, the company decided not to do business with tobacco companies.

Palantir’s products do help the United States military kill people, Mr. Karp agrees, but only those with whom the nation is at war. Palantir is “building something for the betterment of the world,” he says, “but not in absence of realities about the world.”

Here, the power of software is tremendous, but how do we use it wisely? George R.R. Martin has famously pondered over the issue of power in his A Song of Ice and Fire saga. He wrote,

In a room sit three great men, a king, a priest, and a rich man with his gold. Between them stands a sellsword, a little man of common birth and no great mind. Each of the great ones bids him slay the other two. “Do it,” says the king, “for I am your lawful ruler.” “Do it,” says the priest, “for I command you in the names of the gods.” “Do it,” says the rich man, “and all this gold shall be yours.”

The king, the priest, the rich man—who lives and who dies? Who will the swordsman obey? It’s a riddle without an answer, or rather, too many answers. All depends on the man with the sword.

Power resides where men believe it resides. No more and no less. [Power] is a mummer’s trick, [a] shadow on the wall… yet shadows can kill. And ofttimes a very small man can cast a very large shadow.

Computer science is our sword, the tool we wield with disproportionate leverage that casts a large shadow, and only by studying ethics can we handle it properly, to help rather than to harm.

The pursuit of better

In your first blog post, please write a short introduction to who you are, what your interests are, why you are studying Computer Science (or whatever your major is), and what you hope to get out of this class. Additionally, in your opinion, what are the most pressing ethical and moral issues facing Computer Scientists? Which ones are you particularly interested in discussing this semester?

I’m Shuyang Li, a senior computer science major from Shenzhen, China. I’m interested in technology, design, and data science, and I want to study to find out how these different fields can come together and change individual lives and community interactions for the better.

But it’s hard to define “better.” With technological advancements, we often find ourselves with new power, but without a full understanding of its moral consequences. This has happened in history: after the first successful test of the nuclear weapon they invented, Kenneth Bainbridge famously commented that “now [they] are all sons of bitches,” and indeed, it has brought death en masse.

Computer science is so malleable that we as computer scientists find ourselves considering moral issues in all kinds of contexts. Can governments perform mass surveillance to protect national security? How can countries protect innocent citizens from data breaches? Should we attempt to develop a sentient AI without knowing whether it will be benevolent? These are all hard questions where the “better” solution isn’t clear-cut.

From the class, I want to understand how I can define “better.”