list-remote-forwards
Usage: List all remote forwards for mail accounts stored in a SQL database. A list of local domains is supplied to the program (more or less) through the --domain-query option. Any address which forwards to another address whose domain is not contained in this list of local domains is considered a remote forward.

Remote forwards can cause problems for a number of reasons, the most common of which are:

- If the sender has an SPF record authorizing his mail server to send mail on his behalf, then when his message is forwarded by the recipient's mail server, it will fail any subsequent SPF checks. This will likely cause the message to be rejected, and the original recipient's server will generate backscatter.

- If any spam makes it through the filter on the recipient's mail server, that spam will then be forwarded to a remote destination. The remote destination will blame the forwarding server for the spam, which can hurt its reputation and potentially lead to a blacklisting, even though the spam did not originate on the recipient's server.

Whether or not these are an issue depends on the circumstances, but in any case it is useful to know who is forwarding mail off-site.

Input: None.

Output: A list of addresses that are forwarded to remote domains.

Options:

--database
  The name of the database (or file, if SQLite) to which we should connect.
  Default: the name of the current user (Postgres only).

--domain-query
  SQL query used to produce the list of local domains. This should return the set of all domains (i.e., one column) that are local to the server. See the default value for an example.
  Default: "SELECT domain FROM domain WHERE domain <> 'ALL' ORDER BY domain;"

The name of a mail exchanger, the forwards of whose domains we should ignore. For example, if one mail exchanger, mx1.example.com, has strict spam filtering, it may be acceptable to have remote forwarding for domains that have mx1.example.com as their sole mail exchanger (MX record). In that case, you might want to exclude those domains from the report by naming mx1.example.com here. A forward will be excluded from the report only if all of its MX records are contained in the given exclude list. This option can be repeated to add mail exchangers to the exclude list.
  Default: [] (empty)

SQL query used to produce a list of all forwards on the mail system. This query should return the set of all (address, goto) pairs, where "goto" is the destination address, i.e. the address to which "address" forwards. The "goto" field may contain more than one email address, separated by commas.
  Default: "SELECT address,goto FROM alias ORDER BY address;"

Hostname where the database is located (Postgres only).
  Default: none; a UNIX domain socket connection is attempted.

Password used to connect to the database (Postgres only).
  Default: none (assumes passwordless authentication).

--port
  Port number used to connect to the database (Postgres only).
  Default: none; a UNIX domain socket connection is attempted.

Username used to connect to the database (Postgres only).
  Default: the current user.

Examples:

$ list-remote-forwards --database=test/fixtures/postfixadmin.sqlite3
user1@example.com -> user1@example.net
user2@example.com -> user1@example.org
user2@example.com -> user2@example.org
user2@example.com -> user3@example.org
user7@example.com -> user8@example.net
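The core check can be pictured in a few lines of Haskell. This is only an illustration of the definition above (a forward whose destination's domain is not in the local-domain list is remote), not the tool's actual implementation, and every name in it is made up:

import qualified Data.Set as Set
import Data.Char (toLower)

-- Domain part of an address, e.g. "user1@example.com" -> "example.com".
domainOf :: String -> String
domainOf = map toLower . drop 1 . dropWhile (/= '@')

-- Given the set of local domains and the (address, goto) pairs returned by
-- the forward query, keep only forwards whose destination domain is not local.
remoteForwards :: Set.Set String -> [(String, String)] -> [(String, String)]
remoteForwards localDomains forwards =
  [ (addr, dest)
  | (addr, gotos) <- forwards
  , dest <- splitOn ',' gotos                  -- "goto" may hold several addresses
  , not (domainOf dest `Set.member` localDomains)
  ]

-- Minimal comma splitter, so the sketch stays self-contained.
splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOn c rest

With a local-domain set containing only example.com, the pair ("user1@example.com", "user1@example.net") would be reported, matching the first line of the example output above.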
hat
hat-trans transforms Module.hs into Hat/Module.hs such that, when the program is executed, a trace file Programname.hat is generated. Various tools (hat-trail, hat-observe, hat-explore, ...) then allow viewing the trace file in different ways, to locate a bug or to understand how a program works.

Hat 2.8 uses the haskell-src-exts parser and other Hackage libraries to reduce its own size and to simplify extensions. Module imports and exports have also been improved to handle nearly all kinds of renaming, hiding, etc. Hat 2.8 works only with GHC for Haskell 98 (plus a few extensions) and the standard Haskell 98 libraries plus some Haskell 2010 libraries. Although it should build on any operating system, most viewing tools use ANSI console escape sequences and open xterms; hence they will only work well under Unix and X11. Tested on Mac OS X 10.8.4.

Installation:

> cabal -v install

The flag -v allows you to see what is going on. Building takes a long time (one module has 25,000 lines of code). Don't worry about the numerous warning messages.

Use:

> hat-make MyProgram.hs

transforms and compiles all modules of your program and produces the tracing version Hat/MyProgram. Run your program

> Hat/MyProgram

which will produce trace files MyProgram.hat*. Use the viewing tools to explore the trace:

> hat-trail / hat-observe / hat-explore / ... MyProgram

There is documentation in the "docs" folder, but it is partially outdated. There are a few small programs for exploring tracing in the "examples" folder.
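For a quick first experiment, a program as small as the following will do. The file name MyProgram.hs is chosen only to match the commands above, and the bug is deliberate so that there is something to chase in the trace:

-- MyProgram.hs: a tiny Haskell 98 program with a deliberate bug.
-- After "hat-make MyProgram.hs", run "Hat/MyProgram" and then follow the
-- wrong result backwards with e.g. "hat-trail MyProgram".

main :: IO ()
main = print (mySum [1, 2, 3])   -- prints 0, but 6 was intended

-- The bug: (*) should be (+).
mySum :: [Int] -> Int
mySum []       = 0
mySum (x : xs) = x * mySum xs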
PCLT
Deprecated "PCLT" is an abbreviation for "Parametric Composable Localizable Templates" - in fact it should also hold Detalizable. Term "Detailizable content (message)" in this package has a following meaning: some content, representing which it is possible to regulate, in how much details it is represented. Conceptually, this package is a powerful extension to the well known Show class, that (extension) is thought to be embeded in any Haskell program, which requires multilanguage support, and/or where messages should be detailizable. The PCLT catalog consists of: Catalog ID Config, which defines rules and constraints, that are used when catalog gets formed, and when it is used to generate messages. Tempates of messages, possibly related. Each catalog entry is referenced by an ID, and is called "localizable (in languages) template", while it's localization in concrete language is called "localized (in a languages) template". Each catalog entry (localizable template) consists of ID A requirement to detailization level (to used by reciever of representation), which if is not statisfied, the template isn't used. A map of localized templates by languages - the different translations of one same message. Each localized template is a sequence of chunks: plain texts, named parameters placeholders, placeholders for insertion of other template (reference on other template, also called composites). To make a message one needs to specify a catalog, a language, detailization level to orient on a localizable template ID, a map of parameters together with their values. The last two fields (localizable template ID + a map of parameters together with their values) is called instaniation (of templated message). In the program it wrapped in a PCSI data type. A parameter value may be plain text newline nothing a reference to other instaniation (localizable template ID + a map of parameters together with their values) a list of other other instaniation a parameter wrapping, where wrapper is an indentation of text (N whitespaces insertion after each newline character) a list of parameter values. This package comes together with another one - a dependent package PCLT-DB (section Database), which provides a (PostgreSQL 8.4) DB structure where to keep/manage data used for PCLT catalogs formation, simple interface to read this data in haskell program, and a prototype of a longterm service which regularly updates catalog MVar with data from DB. IMPORTANT!!! : It is highly recommended to use ISO 639(3) standard for language names, since PCLT-DB package is oriented on 3 letters (not bigger) language names. Without modifications PCLT-DB won't work for bigger (then 3-letters) names. ToDo considerations for the next versions: Consider using state monad in order to put catalog into an implicit context... and maybe also other uses if monads. Rewrite MakeCatalog, so that it uses CatalogMaths functions. Otherwise the both modules do similar work, and keeping double code is a bad style. Introduction of a new parameter value type: "reparsable" wrapping for parameter value. CatalogMaths extension. Enchance required SDLs with min & max functions. Sacrificing ldtSubcompositesMap of LocalizedTemplate, which initially was introduced to enchance speed of messages generation, in favor of making catalogs easier to modify (which curently isn't supported and isn't an easy task).
yoko
Based on the paper "A Pattern for Almost Homomorphic Functions" at http://www.ittc.ku.edu/~nfrisby/frisby-wgp-2012.pdf, presented at the Workshop on Generic Programming 2012. See also my dissertation, http://www.ittc.ku.edu/~nfrisby/frisby-dissertation.pdf.

yoko views a nominal datatype as a band of constructors, each a nominal type in its own right. Such datatypes can be disbanded via the disband function into an anonymous sum of nominal constructors, and vice versa via the band function. This library uses extensive type-level programming to enrich its instant-generics foundation with capabilities derived from the constructor-centric perspective.

For example, consider a nominal datatype with a constructor named John. Such a type can of course be understood as a sum built from the individual fields' types; yoko's conceptual foundations start there. In particular, the constructor-centric view allows a constructor, say John, to be used independently of its original range type and sibling constructors.

As a generic programming library, yoko extends instant-generics with support for constructor-centric generic programming. The Examples/LambdaLift/LambdaLift.hs file distributed with the yoko source demonstrates defining a lambda-lifting conversion between the two types ULC, which has lambdas, and Prog, which has top-level function declarations instead. These types are defined in separate modules, since they have constructors with the same name. Indeed, the fact that they have matching constructors named App is crucial for yoko's automatic conversion from ULC's App to TLF's App. As written, the generic lambda-lifter would continue to work for any new ULC constructors (e.g. syntax for tuples or mutable references) as long as constructors with the same names and analogous fields were added to TLF and the semantics of those constructors did not involve binding. This default generic behavior of the lambda-lifter is specified in about ten lines of user code. The non-generic code is much more complicated; this is intentional: I wanted to show that sometimes shoehorning an algorithm into the requisite type (i.e. a -> m a') can be difficult and can require subtleties like backwards state. Existing generic libraries do not use constructor names to the degree that yoko does, and so cannot accommodate such generic conversions as well.
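To make the constructor-centric view concrete, here is a small hand-rolled sketch. It is not yoko's actual encoding or API: the Contact datatype and the John_/Jane_ wrappers are made up, and Either merely stands in for yoko's anonymous sum. It only illustrates what disband and band accomplish; yoko derives this machinery generically:

-- A nominal datatype with two constructors (made up for illustration).
data Contact = John Int | Jane String

-- Each constructor as a nominal type in its own right.
data John_ = John_ Int
data Jane_ = Jane_ String

-- An anonymous sum of the constructor types (Either as a stand-in).
type ContactDC = Either John_ Jane_

-- What disband does: view the datatype as a sum of its constructors.
disbandContact :: Contact -> ContactDC
disbandContact (John n) = Left  (John_ n)
disbandContact (Jane s) = Right (Jane_ s)

-- What band does: reassemble the original datatype from the sum.
bandContact :: ContactDC -> Contact
bandContact (Left  (John_ n)) = John n
bandContact (Right (Jane_ s)) = Jane s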