The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
These are the guiding principles of Python, but they are open to interpretation. A sense of humor is required for their proper interpretation.
Whitespace 1:
- 4 spaces per indentation level.
- No hard tabs.
- Never mix tabs and spaces.
- One blank line between functions.
- Two blank lines between classes.
Whitespace 2:
- Add a space after "," in dicts, lists, tuples, & argument lists, and after ":" in dicts, but not before.
- Put spaces around assignments & comparisons (except in argument lists).
- No spaces just inside parentheses or just before argument lists.
- No spaces just inside docstrings.
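A small sketch that puts those spacing rules together (the function and values here are invented purely for illustration):

def send_report(recipients, subject, retries=3):
    totals = {'sent': 0, 'failed': 0}   # space after ',' and ':', none before
    limits = (10, 20, 30)               # space after ',' in tuples too
    count = len(recipients)             # no space just inside parentheses or before the argument list
    if count > retries:                 # spaces around comparisons and assignments
        totals['failed'] = count
    return totals, limits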
Naming:
- joined_lower for functions, methods, attributes
- joined_lower or ALL_CAPS for constants
- StudlyCaps for classes
- camelCase only to conform to pre-existing conventions
- Attributes: interface, _internal, __private
But try to avoid the __private form. I never use it. Trust me. If you use it, you WILL regret it later.
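To see the naming rules side by side, here is a compact sketch (the names are invented for illustration):

MAX_RETRIES = 5                      # constant: ALL_CAPS

class ConnectionPool(object):        # class: StudlyCaps

    def acquire_connection(self):    # method: joined_lower
        self._idle_count = 0         # _internal attribute (single leading underscore)
        return self._idle_count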
Long Lines & Continuations:
Keep lines below 80 characters in length.
Use implied line continuation inside parentheses/brackets/braces:
def __init__(self, first, second, third,
             fourth, fifth, sixth):
    output = (first + second + third
              + fourth + fifth + sixth)
Use backslashes as a last resort:
VeryLong.left_hand_side \
= even_longer.right_hand_side()
Backslashes are fragile; they must end the line they're on. If you add a space after the backslash, it won't work any more. Also, they're ugly.
Docstrings & Comments:
Docstrings = How to use code
Comments = Why (rationale) & how code works
Docstrings explain how to use code, and are for the users of your code. Uses of docstrings:
- Explain the purpose of the function even if it seems obvious to you, because it might not be obvious to someone else later on.
- Describe the parameters expected, the return values, and any exceptions raised.
- If the method is tightly coupled with a single caller, make some mention of the caller (though be careful as the caller might change later).
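A short sketch of a docstring covering purpose, parameters, return value, and raised exceptions (the function and file format are invented for illustration):

def parse_config(path, defaults=None):
    """
    Read the configuration file at `path` and return a dict of settings.

    `defaults`, if given, is a dict of fallback values copied into the
    result first.  Lines must look like ``key = value``; blank lines and
    ``#`` comments are skipped.  Raises IOError if `path` cannot be opened.
    """
    settings = dict(defaults or {})
    for line in open(path):
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, _, value = line.partition('=')
        settings[key.strip()] = value.strip()
    return settings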
Comments explain why, and are for the maintainers of your code. Examples include notes to yourself, like:
# !!! BUG: ...
# !!! FIX: This is a hack
# ??? Why is this here?
Both of these groups include you, so write good docstrings and comments!
Docstrings are useful in interactive use (help()) and for auto-documentation systems.
False comments & docstrings are worse than none at all. So keep them up to date! When you make changes, make sure the comments & docstrings are consistent with the code, and don't contradict it.
There's an entire PEP about docstrings, PEP 257, "Docstring Conventions":
http://www.python.org/dev/peps/pep-0257/
Swap Values:
In other languages:
temp = a
a = b
b = temp
In Python:
b, a = a, b
Behind this simple expression, a pack-and-unpack process is running: each name on the left of the assignment is bound to the value in the corresponding position on the right. To express the concept precisely, we could write the expression as:
(a,b) = (b,a)
Perhaps you've seen this before. But do you know how it works?
- The comma is the tuple constructor syntax.
- A tuple is created on the right (tuple packing).
- A tuple is the target on the left (tuple unpacking).
The right-hand side is unpacked into the names in the tuple on the left-hand side.
Further examples of unpacking:
>>> l = ['David', 'Pythonista', '+1-514-555-1234']
>>> name, title, phone = l
>>> name
'David'
>>> title
'Pythonista'
>>> phone
'+1-514-555-1234'
Useful in loops over structured data:
l (a lowercase L) above is the list we just made (David's info). In the loop below, people is a list containing two items, each a 3-item list.
>>> people = [l, ['Guido', 'BDFL', 'unlisted']]
>>> for (name, title, phone) in people:
...     print name, phone
...
David +1-514-555-1234
Guido unlisted
Each item in people is being unpacked into the (name, title, phone) tuple.
Arbitrarily nestable (just be sure to match the structure on the left & right!):
>>> david, (gname, gtitle, gphone) = people
>>> gname
'Guido'
>>> gtitle
'BDFL'
>>> gphone
'unlisted'
>>> david
['David', 'Pythonista', '+1-514-555-1234']
Interactive "_"
This is a really useful feature that surprisingly few people know.
In the interactive interpreter, whenever you evaluate an expression or call a function, the result is bound to a temporary name, _ (an underscore):
>>> 1 + 1
2
>>> _
2
_ stores the last printed expression.
When a result is None, nothing is printed, so _ doesn't change. That's convenient!
This only works in the interactive interpreter, not within a module.
It is especially useful when you're working out a problem interactively, and you want to store the result for a later step:
>>> import math
>>> math.pi / 3
1.0471975511965976
>>> angle = _
>>> math.cos(angle)
0.50000000000000011
>>> _
0.50000000000000011
Building Strings from Substrings
Start with a list of strings:
colors = ['red', 'blue', 'green', 'yellow']
We want to join all the strings together into one large string. Especially when the number of substrings is large...
Don't do this:
result = ''
for s in colors:
    result += s
This is very inefficient.
It has terrible memory usage and performance patterns. The "summation" will compute, store, and then throw away each intermediate step.
Instead, do this:
result = ''.join(colors)
The join() string method does all the copying in one pass.
When you're only dealing with a few dozen or hundred strings, it won't make much difference. But get in the habit of building strings efficiently, because with thousands or with loops, it will make a difference.
Building Strings, Variations 1
Here are some techniques to use the join() string method.
If you want spaces between your substrings:
result = ' '.join(colors)
Or commas and spaces:
result = ', '.join(colors)
Here's a common case:
colors = ['red', 'blue', 'green', 'yellow']
print 'Choose', ', '.join(colors[:-1]), \
      'or', colors[-1]
To make a nicely grammatical sentence, we want commas between all but the last pair of values, where we want the word "or". The slice syntax does the job. The "slice until -1" ([:-1]) gives all but the last value, which we join with comma-space.
Of course, this code wouldn't work with corner cases, lists of length 0 or 1.
Output:
Choose red, blue, green or yellow
Building Strings, Variations 2
If you need to apply a function to generate the substrings:
result = ''.join(fn(i) for i in items)
This involves a generator expression, which we'll cover later.
If you need to compute the substrings incrementally, accumulate them in a list first:
items = []
...
items.append(item) # many times
...
# items is now complete
result = ''.join(fn(i) for i in items)
We accumulate the parts in a list so that we can apply the join string method, for efficiency.
Use in where possible (1)
Good:
for key in d: print key
- in is generally faster.
- This pattern also works for items in arbitrary containers (such as lists, tuples, and sets).
- in is also an operator (as we'll see).
Bad:
for key in d.keys(): print key
This is limited to objects with a keys() method.
Use in where possible (2)
But .keys() is necessary when mutating the dictionary:
for key in d.keys(): d[str(key)] = d[key]
d.keys() creates a static list of the dictionary keys. Otherwise, you'll get an exception "RuntimeError: dictionary changed size during iteration".
For consistency, use key in dict, not dict.has_key():
# do this:
if key in d: ...do something with d[key]
# not this:
if d.has_key(key): ...do something with d[key]
This usage of in is as an operator.
Dictionary get Method
We often have to initialize dictionary entries before use:
This is the naïve way to do it:
navs = {}
for (portfolio, equity, position) in data:
    if portfolio not in navs:
        navs[portfolio] = 0
    navs[portfolio] += position * prices[equity]
dict.get(key, default) removes the need for the test:
navs = {}
for (portfolio, equity, position) in data:
    navs[portfolio] = (navs.get(portfolio, 0)
                       + position * prices[equity])
Much more direct.
Dictionary setdefault Method (1)
Here we have to initialize mutable dictionary values. Each dictionary value will be a list. This is the naïve way:
Initializing mutable dictionary values:
equities = {}
for (portfolio, equity) in data:
    if portfolio in equities:
        equities[portfolio].append(equity)
    else:
        equities[portfolio] = [equity]
dict.setdefault(key, default) does the job much more efficiently:
equities = {}
for (portfolio, equity) in data:
    equities.setdefault(portfolio, []).append(equity)
dict.setdefault() is equivalent to "get, or set & get". Or "set if necessary, then get". It's especially efficient if your dictionary key is expensive to compute or long to type.
The only problem with dict.setdefault() is that the default value is always evaluated, whether needed or not. That only matters if the default value is expensive to compute.
If the default value is expensive to compute, you may want to use the defaultdict class, which we'll cover shortly.
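A small sketch of that evaluation cost (the function and dictionary here are invented for illustration):

def expensive_default():
    print 'computing default...'   # runs every time, needed or not
    return []

cache = {'a': [1, 2]}
cache.setdefault('a', expensive_default())   # still prints, even though 'a' already exists
cache.setdefault('b', expensive_default())   # prints again; this time the result is actually used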
Dictionary setdefault Method (2)
Here we see that the setdefault dictionary method can also be used as a stand-alone statement:
navs = {}
for (portfolio, equity, position) in data:
    navs.setdefault(portfolio, 0)
    navs[portfolio] += position * prices[equity]
The setdefault dictionary method returns the default value, but we ignore it here. We're taking advantage of setdefault's side effect, that it sets the dictionary value only if there is no value already.
defaultdict
defaultdict is new in Python 2.5, part of the collections module. It is identical to regular dictionaries, except for two things:
- it takes an extra first argument: a default factory function; and
- when a dictionary key is encountered for the first time, the default factory function is called and the result used to initialize the dictionary value.
There are two ways to get defaultdict: import the collections module and reference it via the module, or import the defaultdict name directly:

import collections
d = collections.defaultdict(...)

from collections import defaultdict
d = defaultdict(...)
Here's the example from earlier, where each dictionary value must be initialized to an empty list, rewritten with defaultdict:
from collections import defaultdict
equities = defaultdict(list)
for (portfolio, equity) in data:
    equities[portfolio].append(equity)
There's no fumbling around at all now. In this case, the default factory function is list, which returns an empty list.
This is how to get a dictionary with default values of 0: use int as a default factory function:
navs = defaultdict(int)
for (portfolio, equity, position) in data:
    navs[portfolio] += position * prices[equity]
You should be careful with defaultdict though. You cannot get KeyError exceptions from properly initialized defaultdict instances. You have to use a "key in dict" conditional if you need to check for the existence of a specific key.
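A minimal sketch of that caveat (the dictionary and its keys are invented for illustration):

from collections import defaultdict

groups = defaultdict(list)
groups['a'].append(1)       # 'a' is created with the default []
print groups['missing']     # prints [] -- no KeyError, and the key is now inserted!
print 'missing' in groups   # True, because the lookup above created it
print 'other' in groups     # False -- 'in' tests membership without inserting anything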
Building & Splitting Dictionaries
Here's a useful technique to build a dictionary from two lists (or sequences): one list of keys, another list of values.
given = ['John', 'Eric', 'Terry', 'Michael']
family = ['Cleese', 'Idle', 'Gilliam', 'Palin']
pythons = dict(zip(given, family))

>>> import pprint
>>> pprint.pprint(pythons)
{'John': 'Cleese', 'Michael': 'Palin', 'Eric': 'Idle', 'Terry': 'Gilliam'}
The reverse, of course, is trivial:
>>> pythons.keys()
['John', 'Michael', 'Eric', 'Terry']
>>> pythons.values()
['Cleese', 'Palin', 'Idle', 'Gilliam']
Note that the order of the results of .keys() and .values() is different from the order of items when constructing the dictionary. The order going in is different from the order coming out. This is because a dictionary is inherently unordered. However, the order is guaranteed to be consistent (in other words, the order of keys will correspond to the order of values), as long as the dictionary isn't changed between calls.
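A quick sketch of the round trip, assuming the pythons dictionary above is not modified between the calls:

keys = pythons.keys()
values = pythons.values()
rebuilt = dict(zip(keys, values))
print rebuilt == pythons   # True: whatever the iteration order was, keys and values lined up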
Testing for Truth Values
# do this:
if x:
    pass

# not this:
if x == True:
    pass
It's elegant and efficient to take advantage of the intrinsic truth values (or Boolean values) of Python objects.
Testing a list:
# do this:
if items:
    pass

# not this:
if len(items) != 0:
    pass

# and definitely not this:
if items != []:
    pass
Truth Values
The True and False names are built-in instances of type bool, Boolean values. Like None, there is only one instance of each.
False                              True
False (== 0)                       True (== 1)
"" (empty string)                  any string but "" (" ", "anything")
0, 0.0                             any number but 0 (1, 0.1, -1, 3.14)
[], (), {}, set()                  any non-empty container ([0], (None,), [''])
None                               almost any object that's not explicitly False
Example of an object's truth value:
>>> class C:
...     pass
...
>>> o = C()
>>> bool(o)
True
>>> bool(C)
True
(Examples: execute truth.py.)
To control the truth value of instances of a user-defined class, use the __nonzero__ or __len__ special methods. Use __len__ if your class is a container which has a length:
class MyContainer(object):

    def __init__(self, data):
        self.data = data

    def __len__(self):
        """Return my length."""
        return len(self.data)
If your class is not a container, use __nonzero__:
class MyClass(object):

    def __init__(self, value):
        self.value = value

    def __nonzero__(self):
        """Return my truth value (True or False)."""
        # This could be arbitrarily complex:
        return bool(self.value)
In Python 3.0, __nonzero__ has been renamed to __bool__ for consistency with the bool built-in type. For compatibility, add this to the class definition:
__bool__ = __nonzero__
Index & Item (1)
Here's a cute way to save some typing if you need a list of words:
>>> items = 'zero one two three'.split()
>>> print items
['zero', 'one', 'two', 'three']
Say we want to iterate over the items, and we need both the item's index and the item itself:
i = 0
for item in items:
    print i, item
    i += 1

- or -

for i in range(len(items)):
    print i, items[i]
Index & Item (2): enumerate
The enumerate function takes a list and returns (index, item) pairs:
>>> print list(enumerate(items))
[(0, 'zero'), (1, 'one'), (2, 'two'), (3, 'three')]
We need to use a list wrapper to print the result because enumerate is a lazy function: it generates one item (a pair) at a time, only when required. A for loop is one place that requires one result at a time. enumerate is an example of a generator, which we'll cover in greater detail later. print does not take one result at a time -- we want the entire result -- so we have to explicitly convert the generator into a list before printing it.
Our loop becomes much simpler:
for (index, item) in enumerate(items): print index, item
# compare:
index = 0
for item in items:
    print index, item
    index += 1

# compare:
for i in range(len(items)):
    print i, items[i]
The enumerate version is much shorter and simpler than either of the index-counting versions above, and much easier to read and understand.
An example showing how the enumerate function actually returns an iterator (a generator is a kind of iterator):
>>> enumerate(items)
<enumerate object at 0x011EA1C0>
>>> e = enumerate(items)
>>> e.next()
(0, 'zero')
>>> e.next()
(1, 'one')
>>> e.next()
(2, 'two')
>>> e.next()
(3, 'three')
>>> e.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration
Default Parameter Values
This is a common mistake that beginners often make. Even more advanced programmers make this mistake if they don't understand Python names.
def bad_append(new_item, a_list=[]):
    a_list.append(new_item)
    return a_list
The problem here is that the default value of a_list, an empty list, is evaluated at function definition time. So every time you call the function, you get the same default value. Try it several times:
>>> print bad_append('one')
['one']
>>> print bad_append('two')
['one', 'two']
Lists are mutable objects; you can change their contents. The correct way to get a default list (or dictionary, or set) is to create it at run time instead, inside the function:
def good_append(new_item, a_list=None):
    if a_list is None:
        a_list = []
    a_list.append(new_item)
    return a_list
% String Formatting
Python's % operator works like C's sprintf function.
Although if you don't know C, that's not very helpful. Basically, you provide a template or format and interpolation values.
In this example, the template contains two conversion specifications: "%s" means "insert a string here", and "%i" means "convert an integer to a string and insert here". "%s" is particularly useful because it uses Python's built-in str() function to convert any object to a string.
The interpolation values must match the template; we have two values here, a tuple.
name = 'David'
messages = 3
text = ('Hello %s, you have %i messages'
        % (name, messages))
print text
Output:
Hello David, you have 3 messages
Details are in the Python Library Reference, section 2.3.6.2, "String Formatting Operations". Bookmark this one!
If you haven't done it already, go to python.org, download the HTML documentation (in a .zip file or a tarball), and install it on your machine. There's nothing like having the definitive resource at your fingertips.
Advanced % String Formatting
What many people don't realize is that there are other, more flexible ways to do string formatting:
By name with a dictionary:
values = {'name': name, 'messages': messages}
print ('Hello %(name)s, you have %(messages)i '
       'messages' % values)
Here we specify the names of interpolation values, which are looked up in the supplied dictionary.
Notice any redundancy? The names "name" and "messages" are already defined in the local namespace. We can take advantage of this.
By name using the local namespace:
print ('Hello %(name)s, you have %(messages)i '
       'messages' % locals())
The locals() function returns a dictionary of all locally-available names.
This is very powerful. With this, you can do all the string formatting you want without having to worry about matching the interpolation values to the template.
But power can be dangerous. ("With great power comes great responsibility.") If you use the locals() form with an externally-supplied template string, you expose your entire local namespace to the caller. This is just something to keep in mind.
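A tiny sketch of that risk (the function, template strings, and names are invented for illustration):

def greet(name):
    secret_token = 'hunter2'                  # a local we never meant to expose
    template = 'Hello %(name)s'               # imagine this string came from outside
    evil_template = 'Hello %(secret_token)s'  # ...then it could ask for any local name
    print template % locals()                 # Hello David
    print evil_template % locals()            # Hello hunter2 -- oops

greet('David')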
To examine your local namespace:
>>> from pprint import pprint
>>> pprint(locals())
pprint is a very useful module. If you don't know it already, try playing with it. It makes debugging your data structures much easier!
Advanced % String Formatting
The namespace of an object's instance attributes is just a dictionary, self.__dict__.
By name using the instance namespace:
print ("We found %(error_count)d errors" % self.__dict__)
Equivalent to, but more flexible than:
print ("We found %d errors" % self.error_count)
Note: Class attributes are in the class __dict__. Namespace lookups are actually chained dictionary lookups.
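A minimal sketch of that chained lookup (the class and attribute names are invented for illustration):

class Reporter(object):
    greeting = 'Hello'                   # lives in Reporter.__dict__

    def __init__(self, error_count):
        self.error_count = error_count   # lives in self.__dict__

r = Reporter(3)
print r.__dict__                         # {'error_count': 3}
print Reporter.__dict__['greeting']      # 'Hello'
print r.greeting                         # instance lookup misses, class lookup finds it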
List Comprehensions
List comprehensions ("listcomps" for short) are syntax shortcuts for this general pattern:
The traditional way, with for and if statements:
new_list = []
for item in a_list:
    if condition(item):
        new_list.append(fn(item))
As a list comprehension:
new_list = [fn(item) for item in a_list if condition(item)]
Listcomps are clear & concise, up to a point. You can have multiple for-loops and if-conditions in a listcomp, but beyond two or three total, or if the conditions are complex, I suggest that regular for loops should be used. Applying the Zen of Python, choose the more readable way.
For example, a list of the squares of 0–9:
>>> [n ** 2 for n in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
A list of the squares of odd 0–9:
>>> [n ** 2 for n in range(10) if n % 2]
[1, 9, 25, 49, 81]
Generator Expressions (1)
Let's sum the squares of the numbers up to 100:
As a loop:
total = 0
for num in range(1, 101):
    total += num * num
We can use the sum function to quickly do the work for us, by building the appropriate sequence.
As a list comprehension:
total = sum([num * num for num in range(1, 101)])
As a generator expression:
total = sum(num * num for num in xrange(1, 101))
Generator expressions ("genexps") are just like list comprehensions, except that where listcomps are greedy, generator expressions are lazy. Listcomps compute the entire result list all at once, as a list. Generator expressions compute one value at a time, when needed, as individual values. This is especially useful for long sequences where the computed list is just an intermediate step and not the final result.
In this case, we're only interested in the sum; we don't need the intermediate list of squares. We use xrange for the same reason: it lazily produces values, one at a time.
Generator Expressions (2)
For example, if we were summing the squares of several billion integers, we'd run out of memory with list comprehensions, but generator expressions have no problem. This does take time, though!
total = sum(num * num for num in xrange(1, 1000000000))
The difference in syntax is that listcomps have square brackets, but generator expressions don't. Generator expressions sometimes require enclosing parentheses (whenever the genexp is not the sole argument of a function call), so it's safest to always include them.
Rule of thumb:
- Use a list comprehension when a computed list is the desired end result.
- Use a generator expression when the computed list is just an intermediate step.
Here's a recent example I saw at work. We needed a dictionary mapping month numbers (both as strings and as integers) to month codes for futures contracts. It can be done in one logical line of code.
The way this works is as follows:
* The dict() built-in takes a list of key/value pairs (2-tuples).
* We have a string of month codes (each month code is a single letter, and a string behaves like a sequence of letters). We enumerate over this string to get both the month code and its index.
* The month numbers start at 1, but Python starts indexing at 0, so the month number is one more than the index.
* We want to look up months both as strings and as integers. We can use the int() and str() functions to do this for us, and loop over them.
Recent example:
month_codes = dict((fn(i+1), code) for i, code in enumerate('FGHJKMNQUVXZ') for fn in (int, str))
month_codes result:
{1: 'F', 2: 'G', 3: 'H', 4: 'J', ...,
 '1': 'F', '2': 'G', '3': 'H', '4': 'J', ...}
Sorting
It's easy to sort a list in Python:
a_list.sort()
(Note that the list is sorted in-place: the original list is sorted, and the sort method does not return the list or a copy.)
But what if you have a list of data that you need to sort, but it doesn't sort naturally (i.e., sort on the first column, then the second column, etc.)? You may need to sort on the second column first, then the fourth column.
We can use list's built-in sort method with a custom function:
def custom_cmp(item1, item2):
    return cmp((item1[1], item1[3]),
               (item2[1], item2[3]))

a_list.sort(custom_cmp)
This works, but it's extremely slow for large lists.
Sorting with DSU *
DSU = Decorate-Sort-Undecorate
* Note: DSU is often no longer necessary. See the next section, Sorting With Keys for the new approach.
Instead of creating a custom comparison function, we create an auxiliary list that will sort naturally:
# Decorate:
to_sort = [(item[1], item[3], item)
           for item in a_list]
# Sort:
to_sort.sort()
# Undecorate:
a_list = [item[-1] for item in to_sort]
The first line creates a list containing tuples: copies of the sort terms in priority order, followed by the complete data record.
The second line does a native Python sort, which is very fast and efficient.
The third line retrieves the last element of each tuple in the sorted list; that last element is the complete data record. We're throwing away the sort terms, which have done their job and are no longer needed.
This is a tradeoff of space and complexity against time. Much simpler and faster, but we do need to duplicate the original list.
Sorting With Keys
Python 2.4 introduced an optional argument to the sort list method, "key", which specifies a function of one argument that is used to compute a comparison key from each list element. For example:
def my_key(item):
    return (item[1], item[3])

to_sort.sort(key=my_key)
The function my_key will be called once for each item in the to_sort list.
You can make your own key function, or use any existing one-argument function if applicable:
* str.lower to sort alphabetically regardless of case.
* len to sort on the length of the items (strings or containers).
* int or float to sort numerically, as with numeric strings like "2", "123", "35".
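A few quick sketches of those ready-made key functions (the sample lists are invented for illustration):

words = ['banana', 'Cherry', 'apple']
print sorted(words)                  # ['Cherry', 'apple', 'banana'] -- uppercase sorts first
print sorted(words, key=str.lower)   # ['apple', 'banana', 'Cherry'] -- case-insensitive

numbers = ['2', '123', '35']
print sorted(numbers)                # ['123', '2', '35'] -- plain string order
print sorted(numbers, key=int)       # ['2', '35', '123'] -- numeric order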
Generators
We've already seen generator expressions. We can devise our own arbitrarily complex generators, as functions:
def my_range_generator(stop):
    value = 0
    while value < stop:
        yield value
        value += 1

for i in my_range_generator(10):
    do_something(i)
The yield keyword turns a function into a generator. When you call a generator function, instead of running the code immediately Python returns a generator object, which is an iterator; it has a next method. for loops just call the next method on the iterator, until a StopIteration exception is raised. You can raise StopIteration explicitly, or implicitly by falling off the end of the generator code as above.
Generators can simplify sequence/iterator handling, because we don't need to build concrete lists; just compute one value at a time. The generator function maintains state.
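For instance, driving my_range_generator above by hand shows the state being kept between calls:

gen = my_range_generator(3)
print gen.next()   # 0
print gen.next()   # 1 -- execution resumed right after the yield
print gen.next()   # 2
print gen.next()   # raises StopIteration; a for loop would stop here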
This is how a for loop really works. Python looks at the sequence supplied after the in keyword. If it's a simple container (such as a list, tuple, dictionary, set, or user-defined container) Python converts it into an iterator. If it's already an iterator, Python uses it directly.
Then Python repeatedly calls the iterator's next method, assigns the return value to the loop counter (i in this case), and executes the indented code. This is repeated over and over, until StopIteration is raised, or a break statement is executed in the code.
A for loop can have an else clause, whose code is executed after the iterator runs dry, but not after a break statement is executed. This distinction allows for some elegant uses. else clauses are not often used on for loops, but they can come in handy. Sometimes an else clause perfectly expresses the logic you need.
For example, if we need to check that a condition holds on some item, any item, in a sequence:
for item in sequence:
    if condition(item):
        break
else:
    raise Exception('Condition not satisfied.')
Example Generator
Filter out blank rows from a CSV reader (or items from a list):
import csv

def filter_rows(row_iterator):
    for row in row_iterator:
        if row:
            yield row

data_file = open(path, 'rb')   # path: the CSV file to read
irows = filter_rows(csv.reader(data_file))
Reading Lines From Text/Data Files
datafile = open('datafile')
for line in datafile:
    do_something(line)
This is possible because file objects support a next method, just like other iterators; lists, tuples, dictionaries (over their keys), and generators can all be looped over the same way.
There is a caveat here: because of the way the buffering is done, you cannot mix .next & .read* methods unless you're using Python 2.5+.
EAFP vs. LBYL
EAFP: it's easier to ask forgiveness than permission.
LBYL: look before you leap.
Generally EAFP is preferred, but not always.
* Duck typing
If it walks like a duck, and talks like a duck, and looks like a duck: it's a duck. (Goose? Close enough.) A short sketch follows this list.
* Exceptions
Use coercion if an object must be a particular type. If x must be a string for your code to work, why not call
str(x)
instead of trying something like
isinstance(x, str)
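Going back to the duck-typing bullet above, here is a tiny sketch (the class names are invented for illustration):

class Duck(object):
    def quack(self):
        return 'Quack!'

class Goose(object):
    def quack(self):
        return 'Honk! (close enough)'

def make_it_quack(bird):
    # we only care that bird has a quack method, not what type it is
    print bird.quack()

make_it_quack(Duck())
make_it_quack(Goose())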
EAFP try/except Example
You can wrap exception-prone code in a try/except block to catch the errors, and you will probably end up with a solution that's much more general than if you had tried to anticipate every possibility.
try:
    return str(x)
except TypeError:
    ...
Note: Always specify the exceptions to catch. Never use bare except clauses. Bare except clauses will catch unexpected exceptions, making your code exceedingly difficult to debug.
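A small sketch of why the exception should be named (the variable names are invented; note the deliberate typo):

text = '42'

# do this: only the error you expect is caught
try:
    value = int(text)
except ValueError:
    value = 0

# not this: a bare except also swallows the NameError caused by the typo 'txt'
try:
    value = int(txt)
except:
    value = 0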
Importing
from module import *
You've probably seen this "wild card" form of the import statement. You may even like it. Don't use it.
To adapt a well-known exchange:
(Exterior Dagobah, jungle, swamp, and mist.)
LUKE: Is from module import * better than explicit imports?
YODA: No, not better. Quicker, easier, more seductive.
LUKE: But how will I know why explicit imports are better than the wild-card form?
YODA: Know you will when your code you try to read six months from now.
Wild-card imports are from the dark side of Python.
Never!
The from module import * wild-card style leads to namespace pollution. You'll get things in your local namespace that you didn't expect to get. You may see imported names obscuring module-defined local names. You won't be able to figure out where certain names come from. Although a convenient shortcut, this should not be in production code.
Moral: don't use wild-card imports! It's much better to:

* reference names through their module (fully qualified identifiers),
* import a long module using a shorter name (alias; recommended),
* or explicitly import just the names you need.
Reference names through their module (fully qualified identifiers):
import module
module.name
Or import a long module using a shorter name (alias):
import long_module_name as mod
mod.name
Or explicitly import just the names you need:
from module import name
name
Note that this form doesn't lend itself to use in the interactive interpreter, where you may want to edit and "reload()" a module.
Modules & Scripts
To make a simultaneously importable module and executable script:
if __name__ == '__main__':
    # script code here
When imported, a module's __name__ attribute is set to the module's file name, without ".py". So the code guarded by the if statement above will not run when imported. When executed as a script though, the __name__ attribute is set to "__main__", and the script code will run.
Except for special cases, you shouldn't put any major executable code at the top-level. Put code in functions, classes, methods, and guard it with if __name__ == '__main__'.
Module Structure
"""module docstring"""
# imports
# constants
# exception classes
# interface functions
# classes
# internal functions & classes
def main(...):
...
if __name__ == '__main__':
status = main()
sys.exit(status)
This is how a module should be structured.
Command-Line Processing
Example: cmdline.py:
#!/usr/bin/env python

"""
Module docstring.
"""

import sys
import optparse


def process_command_line(argv):
    """
    Return a 2-tuple: (settings object, args list).
    `argv` is a list of arguments, or `None` for ``sys.argv[1:]``.
    """
    if argv is None:
        argv = sys.argv[1:]

    # initialize the parser object:
    parser = optparse.OptionParser(
        formatter=optparse.TitledHelpFormatter(width=78),
        add_help_option=None)

    # define options here:
    parser.add_option(      # customized description; put --help last
        '-h', '--help', action='help',
        help='Show this help message and exit.')

    settings, args = parser.parse_args(argv)

    # check number of arguments, verify values, etc.:
    if args:
        parser.error('program takes no command-line arguments; '
                     '"%s" ignored.' % (args,))

    # further process settings & args if necessary

    return settings, args


def main(argv=None):
    settings, args = process_command_line(argv)
    # application code here, like:
    # run(settings, args)
    return 0        # success


if __name__ == '__main__':
    status = main()
    sys.exit(status)
Packages
package/
    __init__.py
    module1.py
    subpackage/
        __init__.py
        module2.py
* Used to organize your project.
* Reduces entries in load-path.
* Reduces import name conflicts.
Example:
import package.module1
from package.subpackage import module2
from package.subpackage.module2 import name
In Python 2.5 we now have absolute and relative imports via a future import:
from __future__ import absolute_import
I haven't delved into these myself yet, so we'll conveniently cut this discussion short.
Simple is Better Than Complex
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
—Brian W. Kernighan, co-author of The C Programming Language and the "K" in "AWK"
In other words, keep your programs simple!
Don't reinvent the wheel. Before writing any code:
* Check Python's standard library.
* Check the Python Package Index (the "Cheese Shop"):
http://cheeseshop.python.org/pypi
* Search the web. Google is your friend.
Original link:
http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html