Contributing

Canonically answered

I'm interested in working on AI safety. What should I do?


AI Safety Support offers free calls to advise people interested in a career in AI Safety, so that's a great place to start. We're working on creating a bunch of detailed information for Stampy to use, but in the meantime check out these resources:

How can I contribute to Stampy?


If you're not already there, join the public Discord or ask for an invite to the semi-private one where contributors generally hang out.

The main ways you can help are to answer questions, add questions, review questions, review answers, or improve answers (instructions for helping out with each of these tasks are on the linked pages). You could also join the dev team if you have programming skills.

Would donating small amounts to AI safety organizations make any significant difference?


Many parts of the AI alignment ecosystem are already well-funded, but a savvy donor can still make a difference by picking up grantmaking opportunities which are too small to catch the attention of the major funding bodies or are based on personal knowledge of the recipient.

One way to leverage a small amount of money to the potential of a large amount is to enter a donor lottery, where you donate to win a chance to direct a much larger amount of money (with probability proportional to donation size). This means that the person directing the money will be allocating enough that it's worth their time to do more in-depth research.
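
To make the arithmetic concrete, here is a minimal sketch in Python; the pot size and donation amount are purely illustrative assumptions, not figures from any real lottery:

```python
# Minimal sketch of the donor-lottery arithmetic described above.
# All numbers are made up for illustration.

pot_size = 100_000    # total amount the winner gets to direct, in dollars
my_donation = 1_000   # what I put in

win_probability = my_donation / pot_size        # 0.01, i.e. a 1% chance
expected_directed = win_probability * pot_size  # equals my_donation

print(f"Chance of directing the pot: {win_probability:.1%}")    # 1.0%
print(f"Expected amount directed: ${expected_directed:,.0f}")   # $1,000

# In expectation you direct exactly what you donated, but in the winning
# case you allocate the whole pot at once, which makes in-depth research
# worth the winner's time in a way a small individual grant would not.
```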

For an overview of the work the major organizations are doing, see the 2021 AI Alignment Literature Review and Charity Comparison. The Long-Term Future Fund seems to be an outstanding place to donate based on that, as they are the organization which most other organizations are most excited to see funded.

Can people contribute to alignment by using proof assistants to generate formal proofs?

80k links to an article on high-impact careers in formal verification in the few paragraphs they've written about formal verification.

Some other notes

  • https://github.com/deepmind/cartesian-frames – I emailed Scott about formalizing this in Coq before the repo was published, and he said something like "I wouldn't personally find such software useful, but it sounds like a valuable exercise for the implementer".
  • When I mentioned the possibility of formalizing some of infra-Bayesianism in Coq to Diffractor, his reaction wasn't "we really need someone to do that", just "oh, that sounds cool". I never got around to it; if I did, I'd talk to Vanessa and Diffractor about weakening/particularizing the claims beforehand.
  • If you extrapolate a pattern from those two examples, you start to think that agent foundations is the principal area of interest for proof assistants! And again: does the proof-assistant exercise advance the research, or just provide a nutritious exercise for the programmer?
  • A sketch of a more prosaic scenario in which proof assistants play a role: someone proposes isInnerAligned : GradientDescent -> Prop, and someone else implements a galaxy-brained new type theory/tool in which gradient descent is a primitive (whatever that means). When I mentioned this scenario to Buck, he said "yeah, if that happened I'd direct all the engineers at Redwood to making that tool easier to use"; when I mentioned it to Evan about a year ago, he didn't seem to think it was remotely plausible. Probably a nonstarter. (A minimal sketch of what such a declaration might look like is given below.)
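
For concreteness, here is a minimal sketch of the hypothetical declaration from the last bullet, written in Lean (a Coq version would have the same shape). Every name is a placeholder taken from the scenario above; no such formalization actually exists:

```lean
-- Hypothetical sketch only: these axioms merely restate the scenario above;
-- nobody currently has real definitions for any of these names.
axiom GradientDescent : Type
axiom isInnerAligned : GradientDescent → Prop

-- The kind of goal a "galaxy-brained" tool would need to make tractable:
axiom someTrainingSetup : GradientDescent

-- The open problem, stated but not proved (`sorry` marks the missing proof):
theorem someTrainingSetup_is_aligned : isInnerAligned someTrainingSetup :=
  sorry
```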

At a high level, what is the challenge of alignment that we must meet to secure a good future?


We’re facing the challenge of “Philosophy With A Deadline”.

Many of the problems surrounding superintelligence are the sorts of problems philosophers have been dealing with for centuries. To what degree is meaning inherent in language, versus something that requires external context? How do we translate between the logic of formal systems and normal ambiguous human speech? Can morality be reduced to a set of ironclad rules, and if not, how do we know what it is at all?

Existing answers to these questions are enlightening but nontechnical. The theories of Aristotle, Kant, Mill, Wittgenstein, Quine, and others can help people gain insight into these questions, but are far from formal. Just as a good textbook can help an American learn Chinese, but cannot be encoded into machine language to make a Chinese-speaking computer, so the philosophies that help humans are only a starting point for the project of computers that understand us and share our values.

The field of AI alignment combines formal logic, mathematics, computer science, cognitive science, and philosophy in order to advance that project.

This is the philosophy; the other half of Bostrom’s formulation is the deadline. Traditional philosophy has been going on for almost three thousand years; machine goal alignment has until the advent of superintelligence, a nebulous event which may be anywhere from decades to centuries away.

If the alignment problem doesn’t get adequately addressed by then, we are likely to see poorly aligned superintelligences that are unintentionally hostile to the human race, with some of the catastrophic outcomes mentioned above. This is why so many scientists and entrepreneurs are urging quick action on getting machine goal alignment research up to an adequate level.

If it turns out that superintelligence is centuries away and such research is premature, little will have been lost. But if our projections were too optimistic, and superintelligence is imminent, then doing such research now rather than later becomes vital.

I’m convinced that this is important and want to contribute. What can I do to help?


Great! I’ll ask you a few follow-up questions to help figure out how you can best contribute, give you some advice, and link you to resources which should help you on whichever path you choose. Feel free to scroll up and explore multiple branches of the FAQ if you want answers to more than one of the questions offered :)

Note: We’re still building out and improving this tree of questions and answers; any feedback is appreciated.

At what level of involvement were you thinking of helping?

I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?


OK, it’s great that you want to help! Here are some ideas for ways you could do so without making a huge commitment:

  • Learning more about AI alignment will provide you with good foundations for any path towards helping. You could start by absorbing content (e.g. books, videos, posts), and thinking about challenges or possible solutions.
  • Getting involved with the movement by joining a local Effective Altruism or LessWrong group, Rob Miles’s Discord, and/or the AI Safety Slack is a great way to find friends who are interested and will help you stay motivated.
  • Donating to organizations or individuals working on AI alignment, possibly via a donor lottery or the Long Term Future Fund, can be a great way to provide support.
  • Writing or improving answers on my wiki so that other people can learn about AI alignment more easily is a great way to dip your toe into contributing. You can always ask on the Discord for feedback on things you write.
  • Getting good at giving an AI alignment elevator pitch, and sharing it with people who may be valuable to have working on the problem, can make a big difference. However, you should avoid putting them off the topic by presenting it in a way which causes them to dismiss it as sci-fi (dos and don’ts are covered in the elevator pitch follow-up question).
  • Writing thoughtful comments on AI posts on LessWrong.
  • Participating in the AGI Safety Fundamentals program – either the AI alignment or governance track – and then facilitating discussions for it in the following round. The program involves nine weeks of content, with about two hours of readings + exercises per week and 1.5 hours of discussion, followed by four weeks to work on an independent project. As a facilitator, you'll be helping others learn about AI safety in-depth, many of whom are considering a career in AI safety. In the early 2022 round, facilitators were offered a stipend, and this seems likely to be the case for future rounds as well! You can learn more about facilitating in this post from December 2021.

Why might contributing to Stampy be worth my time?

If you're looking for a shovel-ready and genuinely useful task to further AI alignment without necessarily committing a large amount of time or needing deep specialist knowledge, we think Stampy is a great option!

Creating a high-quality single point of access where people can be onboarded and find resources around the alignment ecosystem seems likely to be high-impact. So, what makes us the best option?

  1. Unlike all other entry points to learning about alignment, we dodge the trade-off between comprehensiveness and overwhelming length by using interactivity (tab explosion in one page!) and semantic search. Single-document FAQs can't do this, so we built a system which can.
  2. We have the ability to point large numbers of viewers towards Stampy once we have the content, thanks to Rob Miles and his 100k+ subscribers, so this won't remain an unnoticed curiosity.
  3. Unlike most other entry points, we are open for volunteers to help improve the content. The main notable one which does accept volunteer edits is the LessWrong tag wiki, which hosts descriptions of core concepts. We strongly believe in not needlessly duplicating effort, so we're pulling live content from it for the descriptions on our own tag pages, and directing the edit links on those pages to the edit page on the LessWrong wiki.

You might also consider improving Wikipedia's alignment coverage or the LessWrong wiki, but we think Stampy has the most low-hanging fruit right now. Additionally, contributing to Stampy means being part of a community of co-learners who provide mentorship and encouragement in the effort to give humanity a bright future. If you're an established researcher or have high-value things to do elsewhere in the ecosystem, it might not be optimal to put much time into Stampy, but if you're looking for a way to get more involved, it might well be.

What can I do to contribute to AI safety?


It’s pretty dependent on what skills you have and what resources you have access to. The biggest commitment is to pursue a career in AI Safety research. Another major option is to pursue a career in AI policy, which you might think is even more important than doing technical research.

Smaller options include donating money to relevant organizations, talking about AI Safety as a plausible career path to other people, or considering the problem in your spare time.

It’s possible that your particular set of skills and resources is not suited to this problem. Unluckily, there are many more problems of similar importance.

Non-canonical answers

How can I join the Stampy dev team?


The development team works on multiple projects in support of Stampy. Currently, these projects include:

  • Stampy UI, which is made mostly in TypeScript.
  • The Stampy Wiki, which is made mostly in PHP and JavaScript.
  • The Stampy Bot, which is made in Python.

However, even if you don’t specialize in any of these areas, do reach out if you would like to help.

To join, please contact our Project Manager, plex. You can reach him on Discord at plex#1874. He will be able to point your skills in the right direction so you can help in the most effective way possible.

Unanswered canonical questions