Contributing

Canonically answered

I'm interested in working on AI safety. What should I do?

AI Safety Support offers free calls to advise people interested in a career in AI Safety, so that's a great place to start. We're working on creating a bunch of detailed information for Stampy to use, but in the meantime check out these resources:

How can I contribute to Stampy?

If you're not already there, join the Discord where the contributors hang out.

The main ways you can help are to answer questions, add questions, review questions, review answers, or improve answers (instructions for helping out with each of these tasks are on the linked pages). You could also join the dev team if you have programming skills.

Would donating small amounts to AI safety organizations make any significant difference?

Many parts of the AI alignment ecosystem are already well-funded, but a savvy donor can still make a difference by picking up grantmaking opportunities which are too small to catch the attention of the major funding bodies or are based on personal knowledge of the recipient.

One way to leverage a small amount of money to the potential of a large amount is to enter a donor lottery, where you donate to win a chance to direct a much larger amount of money (with probability proportional to donation size). This means that the person directing the money will be allocating enough that it's worth their time to do more in-depth research.
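
As a purely illustrative sketch of the mechanism, here is a short Python snippet with made-up donors and amounts; the only point is that each donor's chance of directing the whole pot is proportional to what they put in, so in expectation they direct the same amount they donated:

```python
import random

def pick_allocator(donations: dict[str, float]) -> str:
    """Pick who directs the pooled pot; each donor wins with
    probability proportional to the size of their donation."""
    donors = list(donations)
    amounts = [donations[d] for d in donors]
    return random.choices(donors, weights=amounts, k=1)[0]

# Made-up example: Alice put in $500 of a $10,000 pot, so she directs
# the full pot with probability 500 / 10,000 = 5%.
pot = {"Alice": 500.0, "Bob": 1500.0, "Everyone else": 8000.0}
print(pick_allocator(pot), "directs the full pot of", sum(pot.values()))
```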

For an overview of the work the major organizations are doing, see the 2021 AI Alignment Literature Review and Charity Comparison. Based on that review, the Long-Term Future Fund seems to be an outstanding place to donate, as it is the organization which most other organizations are most excited to see funded.

How can I join the Stampy dev team?

The development team works on multiple projects in support of Stampy. Currently, these projects include:

  • Stampy UI, which is made mostly in TypeScript.
  • The Stampy Wiki, which is made mostly in PHP and JavaScript.
  • The Stampy Bot, which is made in Python.

However, even if you don’t specialize in any of these areas, do reach out if you would like to help.

To join, please contact our Project Manager, plex. You can reach him on Discord at plex#1874. He will be able to point your skills in the right direction so you can help in the most effective way possible.

Can people contribute to alignment by using proof assistants to generate formal proofs?

80,000 Hours links to an article on high-impact careers in formal verification in the few paragraphs they've written about the topic.

Some other notes

  • https://github.com/deepmind/cartesian-frames: I emailed Scott about formalizing this in Coq before the repo was published, and he said something like "I wouldn't personally find such software useful, but it sounds like a valuable exercise for the implementer".
  • When I mentioned the possibility of formalizing some of infrabayesianism in Coq to Diffractor, his reaction was "oh, that sounds cool" rather than "we really need someone to do that". I never got around to it; if I did, I'd talk to Vanessa and Diffractor beforehand about weakening or specializing the results first.
  • If you extrapolate a pattern from those two examples, agent foundations looks like the principal area of interest for proof assistants. And again: does the proof assistant exercise advance the research, or does it mainly provide a nutritious exercise for the programmer?
  • A sketch of a more prosaic scenario in which proof assistants play a role: someone proposes a predicate like isInnerAligned : GradientDescent -> Prop, and someone else implements an ambitious new type theory or tool in which gradient descent is a primitive (whatever that would mean); a purely illustrative sketch follows after these notes. When I mentioned this scenario to Buck, he said that if it happened he'd direct all the engineers at Redwood to making that tool easier to use; when I mentioned it to Evan about a year ago, he didn't seem to think it was remotely plausible. Probably a nonstarter.
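
Purely to illustrate what "proposing isInnerAligned : GradientDescent -> Prop" could even look like, here is a minimal Lean 4 sketch. Every name and definition in it is hypothetical (there is no agreed formal model of gradient descent or of inner alignment); the predicate is a trivial placeholder that only shows the shape of the artifact the note above imagines.

```lean
-- Hypothetical sketch only: a toy stand-in for a formal model of a
-- training process. Nothing here reflects an existing formalization.
structure GradientDescent where
  params : Nat → Float  -- parameter vector, indexed by coordinate
  lr     : Float        -- learning rate

-- Placeholder predicate with the shape isInnerAligned : GradientDescent → Prop.
-- A real definition would need a formal account of inner alignment,
-- which does not currently exist; this one is trivially true.
def isInnerAligned (gd : GradientDescent) : Prop :=
  ∀ n, gd.params n = gd.params n

-- With a real definition, contributors could then try to prove or
-- refute instances like this one in a proof assistant.
example (gd : GradientDescent) : isInnerAligned gd :=
  fun _ => rfl
```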

At a high level, what is the challenge of alignment that we must meet to secure a good future?

We’re facing the challenge of “Philosophy With A Deadline”.

Many of the problems surrounding superintelligence are the sorts of problems philosophers have been dealing with for centuries. To what degree is meaning inherent in language, versus something that requires external context? How do we translate between the logic of formal systems and normal ambiguous human speech? Can morality be reduced to a set of ironclad rules, and if not, how do we know what it is at all?

Existing answers to these questions are enlightening but nontechnical. The theories of Aristotle, Kant, Mill, Wittgenstein, Quine, and others can help people gain insight into these questions, but are far from formal. Just as a good textbook can help an American learn Chinese, but cannot be encoded into machine language to make a Chinese-speaking computer, so the philosophies that help humans are only a starting point for the project of computers that understand us and share our values.

The field of AI alignment combines formal logic, mathematics, computer science, cognitive science, and philosophy in order to advance that project.

This is the philosophy; the other half of Bostrom’s formulation is the deadline. Traditional philosophy has been going on for almost three thousand years; machine goal alignment has only until the advent of superintelligence, a nebulous event which may be anywhere from decades to centuries away.

If the alignment problem doesn’t get adequately addressed by then, we are likely to see poorly aligned superintelligences that are unintentionally hostile to the human race, with some of the catastrophic outcomes mentioned above. This is why so many scientists and entrepreneurs are urging quick action on getting machine goal alignment research up to an adequate level.

If it turns out that superintelligence is centuries away and such research is premature, little will have been lost. But if our projections were too optimistic, and superintelligence is imminent, then doing such research now rather than later becomes vital.

I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?

OK, it’s great that you want to help, here are some ideas for ways you could do so without making a huge commitment:

  • Learning more about AI alignment will provide you with good foundations for any path towards helping. You could start by absorbing content (e.g. books, videos, posts), and thinking about challenges or possible solutions.
  • Getting involved with the movement by joining a local Effective Altruism or LessWrong group, Rob Miles’s Discord, and/or the AI Safety Slack is a great way to find friends who are interested and will help you stay motivated.
  • Donating to organizations or individuals working on AI alignment, possibly via a donor lottery or the Long Term Future Fund, can be a great way to provide support.
  • Writing or improving answers on my wiki so that other people can learn about AI alignment more easily is a great way to dip your toe into contributing. You can always ask on the Discord for feedback on things you write.
  • Getting good at giving an AI alignment elevator pitch and sharing it with people who may be valuable to have working on the problem can make a big difference. However, you should avoid putting them off the topic by presenting it in a way which causes them to dismiss it as sci-fi (see the dos and don’ts in the elevator pitch follow-up question).
  • Writing thoughtful comments on AI posts on LessWrong.
  • Participating in the AGI Safety Fundamentals program – either the AI alignment or governance track – and then facilitating discussions for it in the following round. The program involves nine weeks of content, with about two hours of readings + exercises per week and 1.5 hours of discussion, followed by four weeks to work on an independent project. As a facilitator, you'll be helping others learn about AI safety in-depth, many of whom are considering a career in AI safety. In the early 2022 round, facilitators were offered a stipend, and this seems likely to be the case for future rounds as well! You can learn more about facilitating in this post from December 2021.

Why might contributing to Stampy be worth my time?

If you're looking for a shovel-ready and genuinely useful task to further AI alignment without necessarily committing a large amount of time or needing deep specialist knowledge, we think Stampy is a great option!

Creating a high-quality single point of access where people can be onboarded and find resources around the alignment ecosystem seems likely to be high-impact. So, what makes us the best option?

  1. Unlike all other entry points to learning about alignment, we dodge the trade-off between being comprehensive and being overwhelmingly long, thanks to interactivity (a tab explosion contained in a single page!) and semantic search. Single-document FAQs can't do this, so we built a system which can.
  2. We have the ability to point large numbers of viewers towards Stampy once we have the content, thanks to Rob Miles and his 100k+ subscribers, so this won't remain an unnoticed curiosity.
  3. Unlike most other entry points, we are open to volunteers who want to help improve the content.
The main other entry point which does accept volunteer contributions is the LessWrong tag wiki, which hosts descriptions of core concepts. We strongly believe in not needlessly duplicating effort, so we're pulling live content from it for the descriptions on our own tag pages, and directing the edit links on those pages to the edit page on the LessWrong wiki.

You might also consider improving Wikipedia's alignment coverage or the LessWrong wiki, but we think Stampy has the most low-hanging fruit right now. Additionally, contributing to Stampy means being part of a community of co-learners who provide mentorship and encouragement to join the effort to give humanity a bright future. If you're an established researcher or have high-value things to do elsewhere in the ecosystem it might not be optimal to put much time into Stampy, but if you're looking for a way to get more involved it might well be.

What can I do to contribute to AI safety?

It’s pretty dependent on what skills you have and what resources you have access to. The biggest option is to pursue a career in AI safety research. Another major option is to pursue a career in AI policy, which you might think is even more important than doing technical research.

Smaller options include donating money to relevant organizations, talking about AI Safety as a plausible career path to other people or considering the problem in your spare time.

It’s possible that your particular set of skills and resources is not suited to this problem. Luckily, there are many other problems of similar levels of importance.

OK, I’m convinced. How can I help?

Great! I’ll ask you a few follow-up questions to help figure out how you can best contribute, give you some advice, and link you to resources which should help you on whichever path you choose. Feel free to scroll up and explore multiple branches of the FAQ if you want answers to more than one of the questions offered :)

Note: We’re still building out and improving this tree of questions and answers; any feedback is appreciated.

At what level of involvement were you thinking of helping?

Please view this Google Doc and suggest improvements: https://docs.google.com/document/d/1S-CUcoX63uiFdW-GIFC8wJyVwo4VIl60IJHodcRfXJA/edit#

What training programs and courses are available for AGI safety?

  • AGI safety fundamentals (technical and governance) - The canonical AGI safety 101 course. About 3.5 hours of reading and 1.5 hours of facilitated discussion per week for 8 weeks.
  • Refine - A 3-month incubator for conceptual AI alignment research in London, hosted by Conjecture.
  • AI safety camp - Actually do some AI safety research; more focused on output than on learning.
  • SERI ML Alignment Theory Scholars Program SERI MATS - Four weeks developing an understanding of a research agenda at the forefront of AI alignment through online readings and cohort discussions, averaging 10 h/week. After this initial upskilling period, the scholars will be paired with an established AI alignment researcher for a two-week ‘research sprint’ to test fit. Assuming all goes well, scholars will be accepted into an eight-week intensive scholars program in Berkeley, California.
  • Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) - Brings together young researchers studying complex and intelligent behavior in natural and social systems.
  • Safety and Control for Artificial General Intelligence - An actual AI Safety university course (UC Berkeley). Touches multiple domains including cognitive science, utility theory, cybersecurity, human-machine interaction, and political science.

See also this spreadsheet of learning resources.

Non-canonical answers

Where can I find mentorship and advice for becoming a researcher?

There are multiple programs you can apply to if you want to try becoming a researcher; if accepted, you will get funding and mentorship. Examples include the SERI summer research fellowship, the CERI summer research fellowship, and the SERI ML Alignment Theory Program, among others. Many of these programs run at specific times of the year, typically during the summer.

Other things you can do include: joining the next iteration of the AGI Safety Fundamentals programme (https://www.eacambridge.org/technical-alignment-curriculum); getting 1-1 career advice from 80,000 Hours (https://80000hours.org/speak-with-us) if you're thinking of a career as a researcher working on AI safety questions; and applying to attend an EAGx or EAG conference (https://www.eaglobal.org/events/), where you can meet researchers working on these questions in person and ask them for advice directly.

Some of these resources might be helpful: https://www.aisafetysupport.org/resources/lots-of-links

How can I be a more productive student/researcher?

There are two parts to that answer.

Firstly: By working on the right things. Every generation since the dawn of humanity has had its Einstein-level geniuses, and yet most of them were forgotten by history because they never ran into an important problem to solve.

Secondly: There are a number of useful resources on the internet for becoming more productive. Some leads you might find useful:

  • 80,000 Hours published an article with an extensive list of evidence-backed strategies for becoming better at any job. Start at the top, and work your way down until you find something that makes sense for you to implement.
  • For general problem-solving, the toolbox taught by CFAR (Center for Applied Rationality) has proven useful to many members of the alignment community. There are two sequences on LessWrong written as self-study guides for the CFAR tools: Hammertime, Training Regime.
  • Keep in mind that no productivity advice whatsoever works for everyone. Something might be useful for 50% of the population, or even 99%, and still leave you worse off if you try to implement it. Experiment, iterate, and above all: Trust your own judgment.

Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?

Yes, you can! Check out AI Safety Support's resources for examples of the format this could take.

If you get good reviews and actually help these researchers, you might eventually get funded by external organisations.

How can I contribute in the area of community building?

In order of smallest commitment to largest:

  1. Link your friends to Stampy or Rob's videos
  2. Join or start a local AI Safety group at a university
  3. Get good at giving an elevator pitch
  4. Become a competent advocate by being convincing and having comprehensive knowledge, so you can answer follow-up questions

Unanswered canonical questions