If you want to skip all the tl;dr and cut to the chase, jump to Concrete Proposal.
Why can’t we write list.len()?
Python is an OO language. To reverse a list, you call lst.reverse(); to search a list for an element, you call lst.index(). Even operator expressions are turned into OO method calls: elem in lst is just lst.__contains__(elem).
But a few things that you might expect to be done with methods are instead done with free functions. For example, to get the length of a list (or array or vector or whatever) in different OO languages:1
- lst.length (Ruby)
- lst.size() (C++)
- lst size (Smalltalk)
- [lst count] (Objective C)
- lst.count (Swift)
- lst.length (JavaScript)
- lst.size() (Java)
- len(lst) (Python)
One of these things is not like the others.2 In Python, len is a free function that you call with a collection instance, not a method of collection types that you call on a collection instance. (Of course under the covers, that distinction isn’t as big as it appears, which I’ll get back to later.)
And the list of things that are methods like lst.index(elem) vs. free functions like len(lst) seems a bit haphazard. This often confuses novices, while people who move back and forth between Python and another language they’re more familiar with tend to list it as an example of “why language XXX is better than Python”.
As you might expect, this is for historical reasons. This is covered in the official design FAQ:
The major reason is history. Functions were used for those operations that were generic for a group of types and which were intended to work even for objects that didn’t have methods at all (e.g. tuples).
In modern Python, this explanation seems a bit weird—tuple has methods like tuple.index(), just like list, so why couldn’t it have a len method too? Well, Python wasn’t originally as consistently OO as it is today. Classes were a completely separate thing from types (the type of an instance of class C wasn’t C, but instance),3 and only a handful of types that needed mutating methods, like list.reverse(), had any methods, and they were implemented very differently from methods of classes.
So, could this be fixed? Sure. Should it be? I’m a lot less sure. But I think it’s worth working through the details to understand what questions arise, and what it would take to make this work, before deciding whether it’s worth considering seriously.
Dunder methods
First, an aside.
In Python, most generic free functions, like operators, actually do their dispatch via a dunder-method protocol—that is, by calling a method named with a double-underscore prefix and suffix on the first argument. For example, len(x) calls x.__len__(). So, couldn’t we just rename __len__ to len, leaving the old name as a deprecated legacy alternative (much like, in the other direction, we renamed next to __next__)? Or, if that doesn’t work, make lst.len() automatically fall back to lst.__len__()?
Unfortunately, that would only work for a small handful of functions; most are more complicated than len. For example, next(i) just calls i.__next__(), but next(i, None) tries i.__next__() and returns None on StopIteration. So, collapsing next and __next__ would mean that every iterator type now has to add code to handle the default-value optional parameter, which would be a huge waste of code and effort, or it would mean that users could no longer rely on the default-value argument working with all iterators. Meanwhile, bytes(x) tries the buffer protocol (not implementable in Python, but you can inherit an implementation from bytes, bytearray, etc.), then the __bytes__ method, then the __iter__ protocol, then the old-style sequence __getitem__ protocol. Collapsing bytes and __bytes__ would mean none of those fallbacks happen. And int() has both fallback protocols, and an optional parameter that changes which protocols are tried. And so on. So, this idea is a non-starter.
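To illustrate the kind of logic that would have to move into every iterator type, here is a rough pure-Python sketch of what the two-argument next does. (my_next and the _MISSING sentinel are made-up names; this is a simplification, not CPython’s actual implementation.)

```python
_MISSING = object()  # sentinel: distinguishes "no default" from default=None

def my_next(it, default=_MISSING):
    # Dispatch to the type's __next__, as the builtin does; only swallow
    # StopIteration when a default was actually supplied.
    try:
        return type(it).__next__(it)
    except StopIteration:
        if default is _MISSING:
            raise
        return default

i = iter([1])
assert my_next(i) == 1           # delegates to i.__next__()
assert my_next(i, None) is None  # exhausted, so the default kicks in
```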
C++
C++ isn’t usually the best language to turn to for inspiration; any C++ ideas that make sense for Python probably also made sense for C#, Swift, Scala, or some other language, and the version in those other languages is usually closer to what Python wants.
But in this case, C++ is one of the few languages that, for its own historical reasons,4 haphazardly mixes free functions and methods. For example, to get the length of a container, you call the cont.size() method, but to get the index of an element, you call the free function find(cont, elem) (the exact opposite of Python, in these two cases). There was originally a principled distinction, but the first two decades of the language and what would become its standard library changing back and forth to support each other led to a mishmash.
It’s become part of the “modern C++ dogma” that free functions are part of a type’s interface, just like public members. But some things that are part of a type’s interface, you call function-style; others, you call dot-style—and, because there’s no rhyme or reason behind the distinction, you just have to memorize which are which. And, when designing types, and protocols for those types to interact with, you have to pick one arbitrarily, and be careful to be consistent everywhere.
Over the years, people have tried to come up with solutions to fix this inconsistency for C++. Most of the early attempts involved Smalltalk-style “extension methods”: reopening a type (including “native types” like int
and C arrays) to add new methods. Then, you could just reopen the C array type template and add a begin
method, so lst.begin()
now works no matter what type lst
is. Unfortunately, fitting that into a static language like C++ turned out to be difficult.5
In 2014, there were two new proposals to solve this problem by unifying call syntax. Herb Sutter’s N4165 and Bjarne Stroustrup’s N4174 both have interesting motivation sections that are at least partly relevant to Python and other languages, including some of the areas where they differ. Starting with N4474, they switched to a combined proposal, which gets more into C++-specific issues. At any rate, while I’m not sure which one is on the right side for C++, I think Sutter’s definitely is for Python,6 so that’s what I’m going to summarize here.
The basic idea is simple: the call syntax x.f(y) tries to look up the method x.f and call it on (y), but falls back to looking up the function f and calling it on (x, y). So, for example, you could write lst.len(); that would fail to find a list.len method, and then try the len function, which would work.
Python
So, the basic idea in Python is the same: x.f(y) tries to look up the method x.f, and falls back to the function f.
Of course Python has to do this resolution at runtime, and it has only generic functions rather than separate automatically-type-switched overloads (unless you use something like singledispatch to implement overloads on top of generic functions)—but neither of those turns out to be a problem.
There are also a number of details that are problems, but I think they’re all solvable.
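Before looking at those details, the basic flavor of the proposal can be emulated today for a single class with __getattr__, which Python only calls when normal lookup fails. (FallbackList is a made-up class for illustration, and it only falls back to builtins, where the real proposal would do a full LEGB lookup.)

```python
import builtins
import types

class FallbackList(list):
    # Hypothetical demo: when normal attribute lookup fails, bind a
    # same-named builtin function to this instance as a method.
    def __getattr__(self, name):
        func = getattr(builtins, name, None)
        if not callable(func):
            raise AttributeError(name)
        return types.MethodType(func, self)

lst = FallbackList([3, 1, 2])
assert lst.len() == 3             # falls back to len(lst)
assert lst.sorted() == [1, 2, 3]  # falls back to sorted(lst)
```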
Locals
When lst.len() fails to find a len method on the lst object, we want it to fall back to calling len(lst), so we need to do a lookup on len.
In Python, the lookup scope is resolved partly at compile time, partly at runtime. See How lookup works for details, but I can summarize the relevant parts inline here: if we’re inside a function body and that function or an outer nested function has a local variable named len, compile a local or dereferenced lookup (don’t worry about the details for how it decides which—they’re intuitive, but a bit long to explain here, and not relevant); otherwise, compile a global-or-builtin lookup.
This sounds complicated, but the compiler is already doing this every time you type len(lst) in a function, so making it do the same thing for fallback is pretty simple. Essentially, we compile result = lst.len() as if it were:
```python
try:
    _meth = lst.len
except AttributeError:
    result = len(lst)
else:
    result = _meth()
    del _meth
```
What raises on failure?
With the implementation above, if neither lst.len nor len exists, we’re going to end up raising a NameError. We probably wanted an AttributeError. But that’s easy to handle:
```python
try:
    _meth = lst.len
except AttributeError:
    try:
        result = len(lst)
    except NameError:
        raise AttributeError(...)
else:
    result = _meth()
    del _meth
```
(We don’t have to worry about lst itself raising a NameError, because if it doesn’t exist, the first lst.len would have raised that instead of AttributeError.)
I’ll ignore this detail in the following sections, but it should be obvious how to add it back in to each version.
Method objects
In Python, functions and methods are both first-class objects. You might not actually call lst.len(), but instead store lst.len to call later, or pass it to some other function. So, how does that work?
We definitely don’t want to put off resolution until call time. That would mean lst.len has to become some complicated object that knows how to look up "len" both on lst and in the creating scope (which would also mean if len were a local it would have to become a cellvar, and so on).
So, what we need to do is put the fallback before the call, and make it give us a method object. Like this:
```python
try:
    meth = lst.len
except AttributeError:
    meth = make_method_out_of(len, lst)
# and then the call happens later:
meth()
```
As it turns out, this is pretty much how bound methods already work. That made-up make_method_out_of(len, lst) exists, and is spelled len.__get__(lst, type(lst)). The standard function type has a __get__ method that does the right thing. See the Descriptor HowTo Guide for more details, but you don’t really need any more details here. So:
```python
try:
    meth = lst.len
except AttributeError:
    meth = len.__get__(lst, type(lst))
```
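You can see that machinery today with a plain Python function (which, unlike most builtins, already has __get__); describe here is a made-up function for illustration:

```python
def describe(self):
    # An ordinary free function; the standard function type provides
    # __get__, so it can be bound to an instance just like a method.
    return f"a sequence of {len(self)} items"

lst = [1, 2, 3]
meth = describe.__get__(lst, type(lst))  # a bound method object
assert meth() == "a sequence of 3 items"
assert meth.__self__ is lst              # bound to lst, like lst.append
```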
The only problem here is that most builtin functions, like len, aren’t of function type, and don’t have a __get__ method.7 One obvious solution is to give normal builtin functions the same descriptor method as builtin unbound methods.8 But this still has a problem: you can construct custom function-like objects in Python just by defining the __call__ method. But those functions won’t work as methods unless you also define __get__. Which isn’t much of a problem today—but might be more of a problem if people expect them to work for fallback.
Alternatively, we could make the fallback code build explicit method objects (which is what the normal function.__get__
does already):
```python
try:
    meth = lst.len
except AttributeError:
    try:
        meth = len.__get__(lst, type(lst))
    except AttributeError:
        meth = types.MethodType(len, lst)
```
I’ll assume in later sections that we decide this isn’t necessary and just give builtins the descriptor methods, but it should be obvious how to convert any of the later steps.
Things are slightly more complicated by the fact that you can call class methods on a class object, and look up instance methods on a class object to call them later as unbound methods, and store functions directly in an object dict and call them with the same dot syntax, and so on. For example, we presumably want list.len(lst) to do the same thing as lst.len(). The HowTo covers all the details of how things work today, or How methods work for a slightly higher-level version; basically, I think it all works out, with either variation, but I’d have to give it more thought to be sure.
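The equivalence to preserve is easy to check with methods that exist today: calling through the class with an explicit instance already means the same thing as a bound call, which is exactly what list.len(lst) vs. lst.len() would have to match.

```python
lst = [10, 20, 30]
# Bound call on the instance:
assert lst.index(20) == 1
# Explicit "unbound" call through the class:
assert list.index(lst, 20) == 1
```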
There have been frequent proposals to extend the CPython implementation with a way to do “fast method calls”—basically, x.f()
would use a new instruction to skip building a bound method object just to be called and discarded, and instead directly do what calling the bound method would have done. Such proposals wouldn’t cause any real problem for this proposal. Explaining why requires diving into the details of bytecode and the eval loop, so I won’t bother here.
What about set and delete?
If x.f() falls back to f.__get__(x, type(x))(), then shouldn’t x.f = 3 fall back to f.__set__(x, 3)?
No. For one thing, that would mean that, instead of hooking lookup, we’d be hooking monkeypatching, which is a very silly thing to do. For another, if x has no member named f, x.f = 3 doesn’t fail, it just creates a member, so fallback wouldn’t normally happen anyway. And of course the same goes for deletion.
Data members
We want to be able to write x.f() and have it call f(x). But when we’re dealing with data rather than a method, we probably don’t want to be able to write x.sys and have it return a useless object that looks like a method but, when called, tries to do sys(x) and raises a TypeError: 'module' object is not callable.
Fortunately, the way methods work pretty much takes care of this for us. If you try to construct a types.MethodType with a non-callable first argument, it raises a TypeError (which we can handle and turn into an AttributeError), right there at construction time, not later at call time.
Or, alternatively, we could just make fallback only consider non-data descriptors; if it finds a non-descriptor or a data descriptor, it just raises the AttributeError
immediately.
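That test is easy to state: a non-data descriptor’s type defines __get__ but neither __set__ nor __delete__. A sketch of the check (is_non_data_descriptor is a made-up helper):

```python
def is_non_data_descriptor(obj):
    # Non-data descriptors (like plain functions) define __get__ but
    # neither __set__ nor __delete__; data descriptors define one of those.
    tp = type(obj)
    return (hasattr(tp, '__get__')
            and not hasattr(tp, '__set__')
            and not hasattr(tp, '__delete__'))

def f(self):
    pass

assert is_non_data_descriptor(f)               # plain function: usable
assert not is_non_data_descriptor(42)          # plain data: rejected
assert not is_non_data_descriptor(property())  # data descriptor: rejected
```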
Namespaces
In C++, when you write f(x), the compiler looks for functions named f in the current namespace, and also in the namespace where the type of x is defined. This is the basic idea behind argument-dependent lookup. For example:
```cpp
// things.h
namespace things {
    class MyThing { ... };
    void myfunc(MyThing) { ... }
}

// stuff.cpp
#include <things.h>
namespace stuff {
    void stuffy() {
        things::MyThing thing;
        things::myfunc(thing); // works
        myfunc(thing);         // works, means the same
        thing.myfunc();        // works under N4165
    }
}
```
In Python, the equivalent doesn’t work:
```python
# things.py
class MyThing: ...
def myfunc(thing: MyThing): ...

# stuff.py
import things

def stuffy():
    thing = things.MyThing()
    things.myfunc(thing)  # works
    myfunc(thing)         # NameError
    thing.myfunc()        # AttributeError even with this proposal
```
You can’t just write myfunc(thing), and with the proposed change, you still can’t write thing.myfunc().
So, does that make the idea useless? No. Some of the most important functions are builtins. And some important third-party libraries are meant to be used with from spam import *.
But still, it does mean the idea is less useful.
Could Python implement a simple form of argument-dependent lookup, only for dot-syntax fallback? Sure; it’s actually pretty simple. One option is:
```python
try:
    meth = lst.len
except AttributeError:
    try:
        meth = len.__get__(lst, type(lst))
    except NameError:
        mod = sys.modules[type(lst).__module__]
        meth = mod.len.__get__(lst, type(lst))
```
This extends the LEGB (local, enclosing, global, builtin) rule to LEGBA (…, argument-dependent) for dot-syntax fallback. If the compiler emitted a FAST or DEREF lookup, we don’t need to worry about the A case; if it emitted a GLOBAL lookup, we only need to worry about it if the global and builtins lookups both failed, in which case we get a NameError, and we can then do an explicit global lookup on the module that type(lst) was defined in (and if that too fails, then we get a NameError).
It isn’t quite this simple, because the module might have been deleted from sys.modules, etc. There are already rules for dealing with this in cases like pickle and copy, and I think applying the same rules here would be perfectly acceptable—or, even simpler, just implement it as above (except obviously turning the KeyError into an AttributeError), and you just can’t use argument-dependent lookup to fall back to a function from a module that isn’t available.
Again, I’ll assume we aren’t doing argument-dependent lookup below, but it should be obvious how to change any of the later code to deal with it.
Bytecode details
We could actually compile the lookup and fallback as a sequence of separate bytecode instructions. So, instead of a simple LOAD_ATTR, we’d do a SETUP_EXCEPT, and handle AttributeError by doing a LOAD_FAST, LOAD_DEREF, or LOAD_GLOBAL. This gets a bit verbose to read, and could be a performance problem, but it’s the smallest change, and one that could be done as a bytecode hack in existing Python.
Alternatively, we could add new bytecodes LOAD_ATTR_OR_FAST, LOAD_ATTR_OR_DEREF, and LOAD_ATTR_OR_GLOBAL, and do all the work inside those new opcode handlers. This gets a bit tricky, because we now have to store the name in two arrays (co_names for the ATTR lookup and co_varnames for the FAST lookup) and somehow pass both indices as arguments to the opcode. But we could do that with EXTENDED_ARG. Or, alternatively, we could just make LOAD_ATTR_OR_FAST look in co_varnames, and change the definition of co_names (which isn’t really publicly documented or guaranteed) so that instead of having globals, attributes, etc., it has globals, attributes that don’t fall back to local/free/cell lookups, etc.
It might seem like the best place to do this would be inside the LOAD_ATTR opcode handler, but then we’d need some flag to tell it whether to fall back to fast, deref, or global, and we’d also need some way for code that’s inspecting bytecode (like dis or inspect) to know that it’s going to do so, at which point we’ve really done the same thing as splitting it into the three opcodes from the previous paragraph.
A simpler version of any of these variants is to always fall back to LOAD_NAME, which does the LEGB lookup dynamically, by looking in the locals() dict and then falling back to a global lookup. Since most fallbacks are probably going to be to globals or builtins, and LOAD_NAME isn’t that much slower than LOAD_GLOBAL, this doesn’t seem too bad at first—but it is slower. Plus, this would mean that every function that has any attribute lookup and any local or free variables has to be flagged as needing a locals() dict (normally, one doesn’t get built unless you ask for it explicitly). Still, it might be acceptable for a proof of concept implementation—especially as a variation on the first version.
Lookup overrides
In Python, almost everything is dynamic and hookable. Even the normal attribute lookup rules. When you write x.f, that’s handled by calling x.__getattribute__('f'),9 and any type can override __getattribute__ to do something different.
So, if we want types to be able to control or even prevent fallback to free functions, the default fallback has to be handled inside object.__getattribute__. But, as it turns out (see the next section), that’s very complicated.
Fortunately, it’s not clear it would ever be useful to change the fallback. Let’s say a type wanted to, say, proxy all methods to a remote object, except for methods starting with local_, where local_spam would be handled by calling spam locally. At first glance, that sounds reasonable—but if that spam falls back to something local, you have to ask: local to what?10 How can the type decide at runtime, based on local_spam, that the compiler should have done LEGB resolution on spam instead of local_spam at compile time, and then get that retroactively done? There doesn’t seem to be any reasonable and implementable answer.
So, instead, we’ll just do the fallback in the interpreter: if normal lookup (as possibly overridden by __getattribute__) for x.f fails, do the f lookup.
Introspection
In Python, you can write getattr(x, 'f'), and expect to get the same thing as x.f.
One option is to make getattr(x, 'f') not do the fallback and just make it raise. But that seems to defeat the purpose of getattr. It’s not just about using getattr (or hasattr) for LBYL-style coding, in which case you could argue that LBYL is always approximate in Python and if people don’t like it they can just suck it. The problem is that sometimes you actually have an attribute name as a string, and need to look it up dynamically. For example, many RPC servers do something like this:
```python
cmd, args = parse(message)
try:
    getattr(handler, 'do_' + cmd)(*args)
except AttributeError:
    handler.do_unknown_command(cmd, *args)
```
If I wanted to add a do_help that works on all three of my handlers by writing it as a free function, I’d need getattr to be able to find it, right?
I’m not sure how important this is. But if getattr needs fallback code, the obvious approach is something like this:

```python
try:
    return _old_getattr(x, 'f')
except AttributeError as ex:
    frame = inspect.currentframe().f_back
    try:
        return frame.f_locals['f'].__get__(x, type(x))
    except KeyError:
        pass
    try:
        return frame.f_globals['f'].__get__(x, type(x))
    except KeyError:
        raise ex
```
But doesn’t this have all the same problems I brought up earlier, e.g., with why we can’t expect people to usefully do this in __getattribute__, or why we can’t do it as the interpreter’s own implementation? Yes, but…
The first issue is performance. That’s generally not important when you’re doing getattr. People write x.f() in the middle of inner loops, and expect it to be reasonably fast. (Although when they need it to be really fast, they can write xf = x.f outside the loop and call xf() inside it, you shouldn’t normally have to do that.) People don’t write getattr(x, 'f') in the middle of inner loops. It already takes roughly twice as long for successful lookups, and even longer on failed lookups. So, I don’t think anyone would scream if it also took longer on successful fallback lookups.
The second issue is portability. We can’t require people to use non-portable frame hacks to implement the portable __getattribute__ method properly. But we can use it internally, in the CPython implementation of a builtin. (And other implementations that do things differently can do whatever they need to do differently.)
The final issue is correctness. If f is a local in the calling function, but hasn’t been assigned to yet, a LOAD_FAST on f is going to raise an UnboundLocalError, but locals()['f'] is going to just raise a KeyError, and our code will fall back to a global lookup, which is bad. But fortunately, getattr is a builtin function. In CPython, it’s implemented in C, and uses the C API rather than the Python one. And in C, you can actually get to the parent frame’s varnames and locals array directly. Also, I’m not sure this would be a problem. It’s already true for dynamic lookup of locals()['x'] (and, for that matter, eval('x')), so why shouldn’t it also be true for dynamic lookup of getattr(x, 'f')?
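That existing behavior is easy to demonstrate: dynamic lookup through locals() simply misses an unassigned local, while the compiled direct reference raises.

```python
def demo():
    # 'x' is compiled as a local because of the assignment below,
    # but it isn't bound yet at this point:
    assert 'x' not in locals()  # dynamic lookup just misses it
    try:
        x                       # direct reference compiles to LOAD_FAST
    except UnboundLocalError:
        pass
    else:
        raise AssertionError("expected UnboundLocalError")
    x = 1
    return x

assert demo() == 1
```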
So, I think this is acceptable here.
At any rate, changing getattr automatically takes care of hasattr, the inspect module, and anything else that might be affected.
C API
As just mentioned above, the C API, used by builtins and extension modules, and by programs that embed the Python interpreter, is different from the Python API. The way you get the attribute x.f is by calling PyObject_GetAttrString(x, "f") (or, if you have 'f' as a Python string object f, PyObject_GetAttr(x, f)). So, does this need to fall back the same way as x.f?
I don’t think it does. C code generally isn’t running inside a Python frame, and doesn’t even have Python locals, or an enclosing scope, and it only has globals if it manually adds them (and accesses them) as attributes of the module object it exports to Python. So, fallback to f(x) doesn’t even really make sense. Plus, C API functions don’t always need to be generic in the same way Python functions do, and they definitely don’t need to be as convenient.
The one problem is that the documentation for PyObject_GetAttr explicitly says “This is the equivalent of the Python expression o.attr_name.” (And similarly for the related functions.) But that’s easily fixed: just add “(except that it doesn’t fall back to a method on o built from looking up attr_name as a local, global, etc.)” to that sentence in the docs. Or, alternatively, change it to “This is the equivalent of the Python expression o.__getattribute__('attr_name')”.
This could be a minor headache for something like Cython, that generates C API code out of Pythonesque source. Without looking at how Cython handles local, free, and global variables under the covers, I’m not sure how much of a headache. But it certainly ought to be something that Cython can cope with.
Concrete proposal
The expression obj.attr will fall back to a bound method on obj built from the function attr.
Abstractly, this is done by handling AttributeError from __getattribute__ to look for a binding named attr and, if it’s bound to a non-data descriptor, calling attr.__get__(obj, type(obj)) and using that as the value. This means that implementations’ builtin and extension functions should now be non-data descriptors, just like normal functions. The getattr builtin does the same fallback, but it may handle unbound locals the same way that other dynamic (by-name) lookup functions and/or eval do on a given implementation.
CPython
The compiler will determine whether attr is a local, freevar, cellvar, or global, just as if the expression had been attr instead of obj.attr, and then emit one of the three new opcodes LOAD_ATTR_FAST, LOAD_ATTR_DEREF, or LOAD_ATTR_GLOBAL in place of LOAD_ATTR. These function like LOAD_ATTR, except that the name is looked up in the appropriate array (e.g., co_varnames instead of co_names for LOAD_ATTR_FAST), and that they perform the fallback behavior described above. (The exact implementation of the ceval.c patch will probably be harder to describe in English than to just write.)
Builtin functions gain a __get__ method that binds the function to an instance. (The details are again harder to describe than to write.)
The getattr function uses the calling frame’s locals and globals to simulate fallback lookup, as described in a previous section (but implemented in C instead of Python, of course).
No changes are made to PyObject_GetAttr and friends except for the note in the documentation about being the same as o.attr_name. No changes at all are made to any other public C APIs.
Analysis
I suspect that this change would, overall, actually make Python more confusing rather than less, and might also lead to code written by novices (especially those coming from other languages) being even more inconsistent than today.
But it really is hard to guess in advance. So, is it worth building an implementation to see?
Well, nobody’s going to run a patched interpreter that’s this different from standard Python on enough code to really get a feel for whether this works.
On the other hand, a bytecode hack that can be applied to functions as a decorator, or to modules as an import hook (especially if it can hook the REPL, a la MacroPy) might get some people to play with it. So, even though that sounds like not much less work, and a lot less fun, it’s probably worth doing it that way first, if it’s worth doing anything.
- In some languages, arrays are special, not like other classes, and sometimes this is signaled by a special API. For example, in Java, to get the length of a “native array” you write lst.length, but to get the length of an instance of an array class, you write lst.size(). Most newer languages either do away with the “native types” distinction, like Ruby, or turn it into a user-extensible “value types” distinction, like Swift. ↩
- OK, none of them are like the others—nobody can agree on the name, whether it’s a method or a property (except in those languages where nullary methods and properties are the same thing), etc. But you know what I mean: everyone else has len as a method or property; Python has it as a free function. ↩
- In Python 2.1 and earlier, this was true for all classes. Python 2.2 added the option to make a class inherit from a type, like class C(object): pass, in which case it was a new-style class, and new-style classes are types. Python 3.0 made all classes new-style (by making them inherit from object if they don’t have any other bases). This adds a number of other benefits besides just unifying the notions of class and type. But there are a number of historical artifacts left behind–e.g., spam.__class__ vs. type(spam). ↩
- … which you don’t want to know about, unless you understand or want to learn about fun things like argument-dependent lookup and the complex overload resolution rules. ↩
- N1742 in 2004 seems to be the last attempt to make it work for C++. Building it into a new static language, on the other hand, is a lot easier; C#, Swift, etc. benefited from all of the failed attempts and just had to avoid making the same choices that had painted C++ into a corner where it couldn’t fit extensions in. ↩
- In Python, f(x, y) is not considered part of the public interface of x’s type in the same way as in C++. Meanwhile, dynamic lookup makes f(x, y) auto-completion even worse. And TOOWTDI has a much stronger pull. ↩
- This occasionally comes up even today. For example, you can write a global function def spam(self, arg):, and then write spam = spam inside a class definition, or monkeypatch in C.spam = spam later, and now c.spam(2) works on all C instances. But if you try that with a builtin function, you get a TypeError: spam() takes exactly 2 arguments (1 given). This rarely comes up in practice, and the workarounds aren’t too onerous when they’re rare (e.g., spam = lambda self, arg: spam(self, arg)). But if this fallback became automatic, it would come up all the time, and with no easy workaround, making the fallback feature nearly useless. ↩
__get__
which creates a bound method when called on an object; bound methods have a__get__
that does nothing. But for C builtins, functions and bound methods are the same type (with functions being “bound” to their module),builtin_function_or_method
, which has no__get__
, while unbound methods have a different type,method_descriptor
, which has a__get__
that works like the Python version. The smallest change would be to givebuiltin_function_or_method
a__get__
that rebinds the method. It is occasionally useful that re-binding a Python bound method is a no-op (e.g., so you can dospam = delegate.spam
in a class definition, and that delegate’sself
isn’t overwritten), but I don’t think it would be a problem that re-binding a builtin bound method is different (since today, it raises anAttributeError
, so no code could be relying on it). ↩ - It’s actually a little more complicated than that. See How lookup works for the gory details, or just ignore the details; I’ll cover the bits that are relevant later. ↩
- Of course it could use
inspect.currentframe()
—or, since it’s actually implemented in C, the C API equivalent—to access the locals of the caller. But that doesn’t help with the next part. ↩