Cisco/Python: Backing up the configuration to an external server on write events

Various Cisco devices can push their configuration to another server on a write event, e.g. via HTTP.

Cisco Config:

archive
 path http://1.2.3.4/cisco_config/put/$h-$t
 write-memory

Apache /etc/httpd/conf.d/zzz_cisco_backup.conf:

WSGIDaemonProcess cisco_backup user=apache group=apache threads=10
WSGIPythonPath /opt/cisco_backup/web_root
WSGIScriptAlias /cisco_backup /opt/cisco_backup/web_root/cisco_backup.wsgi

<Directory /opt/cisco_backup/web_root>
WSGIProcessGroup cisco_backup
WSGIApplicationGroup %{GLOBAL}
WSGIScriptReloading On
# Apache 2.4 access control (replaces the old Order/Allow directive pair)
Require all granted

<Files cisco_backup.py>
Require all granted
</Files>
<Files cisco_backup.wsgi>
Require all granted
</Files>

</Directory>

cisco_backup.wsgi File:

import sys

sys.path.append("/opt/cisco_backup/web_root")

from cisco_backup import app as application

cisco_backup.py File:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from flask import Flask
from flask import request

app = Flask(__name__)

@app.route("/put/<cfg>", methods=['PUT'])
def get_config(cfg):
    with open('/opt/cisco_config/incoming_configs/%s' % cfg, "wb") as f:
        f.write(request.data)
    return "ok"

if __name__ == "__main__":
    app.run()

Have fun 😉

Python: Snippet – Sending e-mail, an alternative to Mailer

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import smtplib
from email.mime.text import MIMEText


def postmaster(mfrom, mto, msubject, message, smtphost):

    msg = MIMEText(message.encode("utf-8"))
    msg['Subject'] = msubject
    msg['From'] = mfrom
    msg['To'] = mto

    s = smtplib.SMTP(smtphost)
    s.sendmail(msg['From'], msg['To'], msg.as_string())
    s.quit()
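For a quick look at what postmaster() assembles, the message can be built without contacting an SMTP server. A minimal sketch; the addresses below are placeholders:

```python
from email.mime.text import MIMEText

# Build the same MIME message postmaster() would hand to smtplib
msg = MIMEText(u"Monitoring test message", _charset="utf-8")
msg['Subject'] = "Test"
msg['From'] = "noreply@example.net"
msg['To'] = "admin@example.net"

raw = msg.as_string()
print(raw)
```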

 

Python: Snippet – SSH shell on Cisco devices

With this snippet, commands can be executed on a Cisco shell via SSH.

#!/usr/bin/env python

import paramiko
import sys


def send_string_and_wait_for_string(command, wait_string, should_print):
    # Uses the global `shell` channel opened below
    shell.send(command)

    receive_buffer = ""

    # Keep reading until the expected prompt appears in the buffer
    while wait_string not in receive_buffer:
        receive_buffer += shell.recv(1024)

    if should_print:
        print receive_buffer

    return receive_buffer

client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.62.62.10", username="testuser", password="testpasswd", look_for_keys=False, allow_agent=False)

shell = client.invoke_shell()
send_string_and_wait_for_string("", "#", False)
send_string_and_wait_for_string("terminal length 0\n", "#", False)
output = send_string_and_wait_for_string("show logging\n", "#", False)
print output
client.close()
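The read-until-prompt loop can be exercised without a device by feeding it a fake channel. FakeShell and read_until are invented stand-ins for this sketch, not part of paramiko:

```python
class FakeShell(object):
    """Stand-in for paramiko's Channel: returns canned chunks from recv()."""
    def __init__(self, chunks):
        self.chunks = list(chunks)

    def send(self, command):
        pass  # a real channel would transmit the command here

    def recv(self, bufsize):
        return self.chunks.pop(0)


def read_until(shell, wait_string):
    # Same logic as send_string_and_wait_for_string: collect until the prompt shows up
    buf = ""
    while wait_string not in buf:
        buf += shell.recv(1024)
    return buf


shell = FakeShell(["line one\n", "line two\n", "switch01#"])
output = read_until(shell, "#")
print(output)
```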

More info / source: http://blog.timmattison.com/archives/2014/06/25/automating-cisco-switch-interactions/

Check_MK: Problem with the Apache HTTP proxy – SELinux blocks the reverse proxy connection to the Check_MK instance

I have just installed Check_MK 1.4.0p19 on a freshly installed CentOS 7.4. After starting an OMD site, the web interface only shows the error message:

OMD: Site Not Started

You need to start this site in order to access the web interface.

The Apache error log shows the following:

[Mon Dec 04 08:50:48.097245 2017] [proxy_http:error] [pid 20887] [client x.x.x.x:31372] AH01114: HTTP: failed to make connection to backend: 127.0.0.1, referer: http://server.example.net/extern/
[Mon Dec 04 08:50:56.943253 2017] [proxy:error] [pid 20883] (13)Permission denied: AH00957: HTTP: attempt to connect to 127.0.0.1:5000 (127.0.0.1) failed
[Mon Dec 04 08:50:56.943276 2017] [proxy:error] [pid 20883] AH00959: ap_proxy_connect_backend disabling worker for (127.0.0.1) for 0s
[Mon Dec 04 08:50:56.943280 2017] [proxy_http:error] [pid 20883] [client x.x.x.x:31408] AH01114: HTTP: failed to make connection to backend: 127.0.0.1

However, netstat -tulpen shows that the backend is running:

# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name    
...
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      997        71127      21004/httpd         
...

A look at the audit log reveals that SELinux is stepping in:

# tail -f /var/log/audit/audit.log
...
type=AVC msg=audit(1512377448.096:3647): avc:  denied  { name_connect } for  pid=20887 comm="httpd" dest=5000 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:commplex_main_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1512377448.096:3647): arch=c000003e syscall=42 success=no exit=-13 a0=a a1=559b02b25650 a2=10 a3=7fffb621631c items=0 ppid=20882 pid=20887 auid=4294967295 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=4294967295 comm="httpd" exe="/usr/sbin/httpd" subj=system_u:system_r:httpd_t:s0 key=(null)
type=AVC msg=audit(1512377508.204:3689): avc:  denied  { name_connect } for  pid=21020 comm="httpd" dest=5000 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:commplex_main_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1512377508.204:3689): arch=c000003e syscall=42 success=no exit=-13 a0=a a1=559b02b25650 a2=10 a3=7fffb621633c items=0 ppid=20882 pid=21020 auid=4294967295 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=4294967295 comm="httpd" exe="/usr/sbin/httpd" subj=system_u:system_r:httpd_t:s0 key=(null)
...

For testing, the problem can be worked around temporarily like this:

/usr/sbin/setsebool httpd_can_network_connect 1

To make the change permanent:

/usr/sbin/setsebool -P httpd_can_network_connect 1

 

Python: Snippet/Experiment – Syslog server with global and per-host filters

The code is not finished and was a lab experiment. Global and per-host filters can be defined; log lines that match are additionally written to a separate file.

Config file:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Config definition


class CFG:

    def __init__(self):

        # Path for logfiles
        self.syslogpath = "/home/mthoma/_dev/syslog/log/"

        # Listener port
        self.port = 3702

        # Listener address
        self.host = "0.0.0.0"

        # Global Filter
        self.global_filter = {
            "filter": [
                ".*FOOBAR.*",
                ".*COFFEE.*"
            ]
        }

        # Host Filter
        self.host_filter = {
            "10.201.11.33": {
                "filter": [
                    ".*MACFLAP.*",
                    ".*BUBU.*",
                ]
            },
        }

Syslog Server:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Load config class
from config import CFG

# Load common classes
import re
import logging
import SocketServer
import socket
import os

# Load configuration file
C = CFG()

formatter = logging.Formatter('%(message)s')

def setup_logger(name, log_file, level=logging.INFO):
    logger = logging.getLogger(name)
    logger.setLevel(level)

    # Only attach a handler once; otherwise every call would add another
    # handler and each message would be written multiple times
    if not logger.handlers:
        handler = logging.FileHandler(log_file)
        handler.setFormatter(formatter)
        logger.addHandler(handler)

    return logger


class SyslogUDPHandler(SocketServer.BaseRequestHandler):

    def handle(self):
        data = bytes.decode(self.request[0].strip())
        sockets = self.request[1]

        ip = str(self.client_address[0])

        # Try to resolve the reverse record via DNS
        try:
            name, alias, addresslist = socket.gethostbyaddr(ip)
        except socket.herror:
            name = ip

        # Set path
        path = C.syslogpath + name + "/"

        # Create path if it does not exist
        if not os.path.isdir(path):
            os.mkdir(path)

        # Use per-host logger names so every host gets its own files
        logger = setup_logger('normal_log_' + name, path + "log")
        logger.info(str(data))

        logger_sp = setup_logger('special_log_' + name, path + "spec")
        
        if ip in C.host_filter:
            filters = C.host_filter[ip]['filter'] + C.global_filter['filter']
        else:
            filters = C.global_filter['filter']

        filter_join = "|".join(filters)

        # Write matching lines to the special log as well
        if re.match(r"%s" % filter_join, str(data)):
            logger_sp.info(str(data))

        print "%s : " % self.client_address[0], str(data)



if __name__ == "__main__":

    try:
        server = SocketServer.UDPServer((C.host,C.port), SyslogUDPHandler)
        server.serve_forever(poll_interval=0.5)

    except (IOError, SystemExit):
        raise

    except KeyboardInterrupt:
        print "Ctrl+C pressed. Shutting down."
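The filter logic can be tried in isolation with the same regex-join approach the handler uses. The addresses and patterns below are made-up samples:

```python
import re

global_filter = {"filter": [".*FOOBAR.*", ".*COFFEE.*"]}
host_filter = {"10.201.11.33": {"filter": [".*MACFLAP.*"]}}


def is_special(ip, line):
    # Combine host-specific and global patterns exactly like the handler does
    filters = list(global_filter["filter"])
    if ip in host_filter:
        filters = host_filter[ip]["filter"] + filters
    return re.match("|".join(filters), line) is not None


print(is_special("10.201.11.33", "Jan 1 MACFLAP detected on Gi0/1"))
```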

 

Python: Snippet – Multiprocessing with results

An example of parallelizing jobs whose results are collected and returned as a list.

#!/usr/bin/env python
# -*- encoding: utf-8; py-indent-offset: 4 -*-

import os
from multiprocessing import Pool


def worker(job):
    x, y = job
    result = x ** y
    return os.getpid(), result
  
if __name__ == '__main__':
    jobs = [(1, 2), (3, 4), (5, 6), (11, 12), (13, 14), (15, 16), (21, 22), (23, 24), (25, 26)]
    
    result_buffer = []
  
    pool = Pool(processes=5)
    
    for job in jobs:
        result_buffer.append(pool.apply_async(worker, args=(job,)))
    
    pool.close()
    pool.join()
  
    results = [r.get() for r in result_buffer]

    print results
  
    for pid, result in results:
        print "working pid was: %s" % pid
        print "result is: %s" % result
        print "---"

Example output:

$ python mp_with_result.py

[(7992, 1), (7992, 81), (7992, 15625), (7992, 3138428376721L), (7992, 3937376385699289L), (7992, 6568408355712890625L), (7992, 122694327386105632949003612841L), (7992, 480250763996501976790165756943041L), (7992, 2220446049250313080847263336181640625L)]
working pid was: 7992
result is: 1
---
working pid was: 7992
result is: 81
---
working pid was: 7992
result is: 15625
---
working pid was: 7992
result is: 3138428376721
---
working pid was: 7992
result is: 3937376385699289
---
working pid was: 7992
result is: 6568408355712890625
---
working pid was: 7992
result is: 122694327386105632949003612841
---
working pid was: 7992
result is: 480250763996501976790165756943041
---
working pid was: 7992
result is: 2220446049250313080847263336181640625
---

 

Python: Experiment/Snippet – Compressing and deleting logfiles after X days

An approach for log directories in the format /log/<yyyy>/<mm>/<dd>/<misc. logfiles>

#!/usr/bin/env python

import gzip
import shutil
import os
import datetime
import time

#############################################
# Config
#############################################

# Path of Logfiles
# Structure is /opt/log/<YYYY>/<MM>/<DD>/
gpath='/opt/log/'

# hold logs for x days
hold_time=180



#############################################

def get_immediate_subdirectories(a_dir):
    return [name for name in os.listdir(a_dir) if os.path.isdir(os.path.join(a_dir, name))]

def delete_files(f):
    # delete file if older than hold time
    nowx = time.time()

    for file in os.listdir(f):    
        if os.stat(f+file).st_mtime < nowx - hold_time * 86400:
            f_path = f+file
            print "delete %s " % f_path
            os.remove(f_path)
            
    # remove the directory if it is empty by now
    try:
        os.rmdir(f)
    except OSError:
        pass
            
    
    
def compress_files(lpath):
    # Compress files
    print "Working on: " + lpath
    obj = os.listdir(lpath)
    for f in obj:
        if os.path.isfile(lpath+f) and ".gz" not in f:
            with open(lpath+f,'rb') as f_in:
                with gzip.open(lpath+f+".gz",'wb') as f_out:
                    shutil.copyfileobj(f_in, f_out)
                    os.remove(lpath+f)
            
# compress everything that is older than the current day

now = datetime.datetime.now()
years = get_immediate_subdirectories(gpath)

for year in years:

    # delete empty directories
    if not os.listdir(gpath+year):
        os.rmdir(gpath+year)
    else:

        months = get_immediate_subdirectories(gpath+year)
    
        for month in months:

            # delete empty directories
            if not os.listdir(gpath+year+"/"+month):
                os.rmdir(gpath+year+"/"+month)

            else:
                days = get_immediate_subdirectories(gpath+year+"/"+month)
                        
                # Exclude the current day from compressing & cleaning
                # (directory names are zero-padded, so format month and day accordingly)
                if month == "%02d" % now.month and year == str(now.year):

                    now_day = "%02d" % now.day

                    if now_day in days:
                        days.remove(now_day)
        
                for day in days:
                    # delete empty directories
                    if not os.listdir(gpath+year+"/"+month+"/"+day+"/"):
                        os.rmdir(gpath+year+"/"+month+"/"+day+"/")
                    else:
                        # compress all files in folder
                        compress_files(gpath+year+"/"+month+"/"+day+"/")
                        
                        # delete old files
                        delete_files(gpath+year+"/"+month+"/"+day+"/")
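The compression step can be tried safely against a scratch directory before pointing the script at real logs. A self-contained sketch that mirrors compress_files():

```python
import gzip
import os
import shutil
import tempfile

# Create a scratch file, compress it the same way compress_files() does,
# then remove the original
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "app.log")
with open(src, "wb") as f:
    f.write(b"some log line\n" * 100)

with open(src, "rb") as f_in:
    with gzip.open(src + ".gz", "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
os.remove(src)

# Verify the round trip
with gzip.open(src + ".gz", "rb") as f:
    restored = f.read()
print(len(restored))
```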

 

Python: Snippet – Repairing a broken UTF-8 string

I got a string back from a database that was UTF-8 but had been decoded incorrectly, which turned Geschäftsstelle into Gesch├ñftsstelle.

The following snippet re-decodes such a broken string as UTF-8 (in Python 2 the chr/ord round trip is effectively a no-op; the important part is the final .decode("utf-8")):

name_kaputt = 'Gesch\xc3\xa4ftsstelle'

name = ''.join(chr(ord(c)) for c in name_kaputt).decode("utf-8")

print name_kaputt
print name

Result:

Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> name_kaputt = 'Gesch\xc3\xa4ftsstelle'
>>> name = ''.join(chr(ord(c)) for c in name_kaputt).decode("utf-8")
>>> print name_kaputt
Geschäftsstelle
>>> print name
Geschäftsstelle
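In Python 3 the same repair is an encode/decode round trip: encode the mis-decoded text back to bytes via Latin-1 (one byte per character) and decode those bytes as UTF-8:

```python
# Python 3
broken = 'Gesch\xc3\xa4ftsstelle'  # UTF-8 bytes that were decoded one byte per character
fixed = broken.encode('latin-1').decode('utf-8')
print(fixed)
```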

 

 

Python: Snippet – Checking whether a date/timestamp is older than X days, e.g. 90 days

The datetime type supports arithmetic directly, which makes this very convenient.

Example:

import datetime

old_time = datetime.datetime(2016, 4, 11, 10, 57, 23)
today = datetime.datetime.today()

age = today - old_time

if age.days > 90:
    print "Older than 90 days"
else:
    print "Not older than 90 days"

Example with a file's age:

import datetime
import os

file_mod_time = datetime.datetime.fromtimestamp(os.path.getmtime('foobar.txt'))
today = datetime.datetime.today()

age = today - file_mod_time

if age.days > 90:
    print "File older than 90 days"
else:
    print "File not older than 90 days"
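Instead of reading age.days, the age can also be compared directly against a timedelta, which takes hours and minutes into account as well:

```python
import datetime

old_time = datetime.datetime(2016, 4, 11, 10, 57, 23)
age = datetime.datetime.today() - old_time

# timedelta objects support comparison operators directly
is_old = age > datetime.timedelta(days=90)
print(is_old)
```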

 

A simple random string generator in Python

''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(N))

N = number of characters

e.g. uppercase + digits with 16 characters:

OMD[dev1]:~$ python
Python 2.7.13 (default, Jul 24 2017, 12:14:45) 
[GCC 6.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information. 
>>> import string
>>> import random
>>> ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(16))
'J8J3D3UMASJ33B1M'
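For security-relevant strings (passwords, tokens), the default PRNG should be swapped for random.SystemRandom, which draws from the OS entropy source but offers the same interface:

```python
import random
import string

N = 16  # number of characters

# SystemRandom uses os.urandom() under the hood
rng = random.SystemRandom()
token = ''.join(rng.choice(string.ascii_uppercase + string.digits) for _ in range(N))
print(token)
```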

Source/more: https://stackoverflow.com/questions/2257441/random-string-generation-with-upper-case-letters-and-digits-in-python

Extracting subnets and IP addresses from SPF records (e.g. Office365 or Google Apps for Business)

If you want to put your own mail server (Postfix) in front of Office365 or Google Apps for Business for sending/receiving, you have to whitelist the Microsoft/Google mail servers in Postfix's mynetworks.

The script resolves all includes of the SPF records and generates CIDR maps that can be included in Postfix.

Example:

max@dev1:~$ python get_subnets_of_spf_record_mynetwoks.py
Working on job office365
Working on job google

Two files are generated:

max@dev1:~$ cat /etc/postfix/networks/google 
64.18.0.0/20 OK
64.233.160.0/19 OK
66.102.0.0/20 OK
66.249.80.0/20 OK
72.14.192.0/18 OK
74.125.0.0/16 OK
108.177.8.0/21 OK
173.194.0.0/16 OK
207.126.144.0/20 OK
209.85.128.0/17 OK
216.58.192.0/19 OK
216.239.32.0/19 OK
[2001:4860:4000::]/36 OK
[2404:6800:4000::]/36 OK
[2607:f8b0:4000::]/36 OK
[2800:3f0:4000::]/36 OK
[2a00:1450:4000::]/36 OK
[2c0f:fb50:4000::]/36 OK
172.217.0.0/19 OK
108.177.96.0/19 OK
max@dev1:~/test$ cat /etc/postfix/networks/office365
207.46.101.128/26 OK
207.46.100.0/24 OK
207.46.163.0/24 OK
65.55.169.0/24 OK
157.56.110.0/23 OK
157.55.234.0/24 OK
213.199.154.0/24 OK
213.199.180.0/24 OK
157.56.112.0/24 OK
207.46.51.64/26 OK
157.55.158.0/23 OK
64.4.22.64/26 OK
40.92.0.0/14 OK
40.107.0.0/17 OK
40.107.128.0/17 OK
134.170.140.0/24 OK
[2a01:111:f400::]/48 OK
23.103.128.0/19 OK
23.103.198.0/23 OK
65.55.88.0/24 OK
104.47.0.0/17 OK
23.103.200.0/21 OK
23.103.208.0/21 OK
23.103.191.0/24 OK
216.32.180.0/23 OK
94.245.120.64/26 OK
[2001:489a:2202::]/48 OK

In Postfix they are included in the main.cf:

# ----------------------------------------------------------------------
# My Networks
# ----------------------------------------------------------------------
mynetworks =
        cidr:/etc/postfix/networks/local
        cidr:/etc/postfix/networks/other
        cidr:/etc/postfix/networks/google
        cidr:/etc/postfix/networks/office365

Since the records can change from time to time, it is advisable to set up a cron job for this. I use a variant with diff that only patches the files when the result is non-empty.

The script can also be adapted for other services:

lookup_spf = {
# Google Apps for Business
"google": {
          "domain": "google.com",
          "file"  : "/etc/postfix/networks/google",
          },

# Office365
"office365": {
          "domain": "spf.protection.outlook.com",
          "file"  : "/etc/postfix/networks/office365",
          },

# Example
"example": {
          "domain": "example.com",
          "file"  : "/etc/postfix/networks/example",
          },

}

Sourcecode:

#!/usr/bin/env python

#
# get_subnets_of_spf_record_mynetwoks.py
# Resolve all known ip addresses from spf record and generate cidr map for postfix
#
# Version 1.0
# Written by Maximilian Thoma (http://www.lanbugs.de)
#
# The generated files can be used in postfix config with for example mynetworks = cidr:/etc/postfix/<generated_file>
#
# This program is free software; you can redistribute it and/or modify it under the terms of the
# GNU General Public License as published by the Free Software Foundation;
# either version 2 of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with this program;
# if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, 
# MA 02110, USA
#

#
# Requirements:
# dnspython module  -> pip install dnspython
#

import dns.resolver
from dns.exception import DNSException
import re
import sys

# Look for DNS Record at:
#
# "jobname": {
#            "domain": "domainname",
#            "file": "output_file",
#            }
#

# 

lookup_spf = {
# Google Apps for Business
"google": {
          "domain": "google.com",
          "file"  : "/etc/postfix/networks/google",
          },

# Office365
"office365": {
          "domain": "spf.protection.outlook.com",
          "file"  : "/etc/postfix/networks/office365",
          },
}

############################################################################################

def getspf(record, filehandler):
    # Init Resolver
    myResolver = dns.resolver.Resolver()

    try:
        # Try to lookup TXT record
        myAnswer = myResolver.query(record,"TXT")

    except DNSException:
        sys.stderr.write("Failed to query record, SPF broken.")
        return

    results = []

    for rdata in myAnswer:
        # Get string out of records
        for txt_string in rdata.strings:
            # Append to SPF Records buffer if "spf" in string
            if "spf" in txt_string:
                results.append(txt_string)

    # If results >=1
    if len(results) >= 1:
        # Work on records
        for spf in results:
            # Split parts
            parts = spf.split(" ")
            # Check parts
            for part in parts:

                s_include = re.match(r"^include:(?P<domain>.*)$", part)
                s_ip4 = re.match(r"^ip4:(?P<ip4>.*)$", part)
                s_ip6 = re.match(r"^ip6:(?P<ip6>.*)$", part)

                # If in part "include" found, next round
                if s_include:
                    getspf(s_include.group('domain'), filehandler)
                # elif ip4 found
                elif s_ip4:
                    filehandler.write(s_ip4.group('ip4') + " OK\n")
                # elif ip6 found
                elif s_ip6:
                    filehandler.write("[" + s_ip6.group('ip6').replace("/","]/") + " OK\n")
                # else no valid record
                else:
                    pass
    # no results 
    else:
        sys.stderr.write("No results")
        pass

def main():
    # Working on jobs
    for jobname, config in lookup_spf.iteritems():

        print "Working on job %s" % jobname

        # open file
        filehandler = open(config['file'], 'w')
        # start query spf records
        getspf(config['domain'], filehandler)
        # close file
        filehandler.close()


#getspf(lookup_spf)

if __name__ == "__main__":
    main()
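The mechanism parsing can be checked against a canned SPF string without any DNS lookup (the record below is an invented sample):

```python
import re

spf = "v=spf1 ip4:192.0.2.0/24 ip6:2001:db8::/32 include:spf.example.com -all"

lines = []
for part in spf.split(" "):
    s_ip4 = re.match(r"^ip4:(?P<ip4>.*)$", part)
    s_ip6 = re.match(r"^ip6:(?P<ip6>.*)$", part)
    if s_ip4:
        lines.append(s_ip4.group('ip4') + " OK")
    elif s_ip6:
        # Postfix CIDR tables expect IPv6 networks in [addr]/prefix notation
        lines.append("[" + s_ip6.group('ip6').replace("/", "]/") + " OK")

print(lines)
```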

 

Python: Snippet – Executing/importing Python code from text files

Check_MK stores all of its data in plain files directly as executable Python code.

To reuse the stored dictionaries etc. in your own scripts, you can load them with eval() or exec().

eval() can be used, for example, to load a dictionary into a variable; exec() can also load whole functions etc.

Example with eval():

dict.txt

{"foo":"bar","aaa":"bbb"}

import_dict.py

#!/usr/bin/env python

with open("dict.txt","r") as f:
    x = eval(f.read().replace("\n",""))

print x

print x['foo']

Result:

max@cmkdevel:~/dev$ python import_dict.py
{'foo': 'bar', 'aaa': 'bbb'}
bar
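One caveat: eval() executes arbitrary code from the file. If the file only contains plain literals such as this dictionary, ast.literal_eval() from the standard library is the safer choice:

```python
import ast

# literal_eval accepts only Python literals (dicts, lists, strings, numbers, ...)
# and raises an error for anything executable
x = ast.literal_eval('{"foo": "bar", "aaa": "bbb"}')
print(x['foo'])
```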

Example with exec():

code.txt

max['foo'] = {
             "foo": "bar",
             "fxx": "boo",
             }


def hello(name):
    print "Hallo " + name

import_code.py

#!/usr/bin/env python

max = {}

with open("code.txt","r") as f:
    exec(f.read())


print max
print max['foo']
print max['foo']['foo']
hello("max")

Result:

max@cmkdevel:~/dev$ python import_code.py 
{'foo': {'foo': 'bar', 'fxx': 'boo'}}
{'foo': 'bar', 'fxx': 'boo'}
bar
Hallo max

 

 

Python: Snippet – Saving and reusing objects

The pickle module (English: to preserve/cure) provides functions for saving objects. The saved objects can be restored later; the data is stored as a byte stream.

The following data types are supported: None, booleans, numbers, strings, tuples, lists and dictionaries (as long as they contain only picklable objects), as well as top-level functions and classes and instances of picklable classes.

A small example:

dict_to_file.py

#!/usr/bin/env python

import pickle

t = {
"sname": "Foo",
"lname": "Bar",
"street": "Foo Street",
"city": "Bar City"
}

with open("test.pkl","wb") as f:
    pickle.dump(t, f, pickle.HIGHEST_PROTOCOL)

file_to_dict.py

#!/usr/bin/env python

import pickle
import pprint

with open("test.pkl","rb") as f:
    t = pickle.load(f)

pprint.pprint(t)

Output of file_to_dict.py:

max@cmkdevel:~$ python file_to_dict.py 
{'city': 'Bar City', 'lname': 'Bar', 'sname': 'Foo', 'street': 'Foo Street'}
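The same round trip also works in memory with pickle.dumps()/pickle.loads(), without any file:

```python
import pickle

t = {"sname": "Foo", "lname": "Bar"}

# Serialize to a byte string and restore it again
blob = pickle.dumps(t, pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
print(restored)
```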

 

Python: Snippet – Search and replace in files

The title of the post says it all 😉

Python 3:

#!/usr/bin/env python3

import fileinput
import re

file = fileinput.FileInput("/etc/ssh/sshd_config", inplace=True, backup=".bak")

for line in file:
    line = re.sub(r".*Banner.*","Banner /etc/issue.net", line)
    print(line, end='')

file.close()

Python 2:

#!/usr/bin/env python

import fileinput
import re
import sys

file = fileinput.FileInput("/etc/ssh/sshd_config", inplace=True, backup=".bak")

for line in file:
    line = re.sub(r".*Banner.*","Banner /etc/issue.net", line)
    sys.stdout.write(line)

file.close()

 

Python: Snippet – Threading with results

Code snippet:

#!/usr/bin/env python

import socket
from multiprocessing.pool import ThreadPool
import pprint


jobs = ("www.heise.de","www.google.com","www.golem.de","www.google.de","www.lanbugs.de","www.microsoft.com")

def worker(domain):
    # Resolve once and reuse the result instead of querying DNS twice
    ip = socket.gethostbyname(domain)
    print ip
    return ip

pool = ThreadPool(processes=3)

async_results = {}

for d in jobs:
    print "start " + d
    async_results[d] = pool.apply_async(worker, args=(d,))

# Fetch the results only after all jobs have been submitted; calling .get()
# directly inside the loop would block and run the jobs one after another
result_buffer = {}
for d, r in async_results.items():
    result_buffer[d] = r.get()


pprint.pprint(result_buffer)


Output:

>python thread_with_result.py 
start www.heise.de
193.99.144.85
start www.google.com
172.217.20.68
start www.golem.de
109.68.230.138
start www.google.de
172.217.20.99
start www.lanbugs.de
81.169.181.94
start www.microsoft.com
104.108.168.41
{'www.golem.de': '109.68.230.138',
 'www.google.com': '216.58.207.68',
 'www.google.de': '172.217.20.99',
 'www.heise.de': '193.99.144.85',
 'www.lanbugs.de': '81.169.181.94',
 'www.microsoft.com': '104.108.168.41'}

 

Python: Oracle DB module for Python on CentOS 6

Source: https://gist.github.com/hangtwenty/5547377

#!/bin/bash

# INSTALL ORACLE INSTANT CLIENT #
#################################

# NOTE: Oracle requires at least 1176 MB of swap (or something around there).
# If you are using CentOS in a VMWare VM, there's a good chance that you don't have enough by default.
# If this describes you and you need to add more swap, see the
# "Adding a Swap File to a CentOS System" section, here:
# http://www.techotopia.com/index.php/Adding_and_Managing_CentOS_Swap_Space

# Install basic dependencies
sudo yum -y install libaio bc flex

echo "Now go get some the following two RPMs ..."
echo "- basic: oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm"
echo "- SDK/devel: oracle-instantclient11.2-devel-11.2.0.3.0-1.x86_64.rpm"
echo "... from this URL: http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html"
echo "WARNING: It's pretty annoying, they make you sign up for an Oracle account, etc."
echo 'I will assume you have put these two files are into ~/Downloads'
echo "Press any key once you're ready" && read -n 1 -s

sudo rpm -ivh ~/Downloads/oracle-instantclient11.2-basic-*
sudo rpm -ivh ~/Downloads/oracle-instantclient11.2-devel-*

# SET ENVIRONMENT VARIABLES #
#############################

# Source for this section: http://cx-oracle.sourceforge.net/BUILD.txt

# (SIDENOTE: I had to alter it by doing some digging around for where the Oracle RPMs really installed to;
# if you ever need to do this, do a command like this:
#     rpm -qlp <rpm_file_of_concern.rpm>)

echo '# Convoluted undocumented Oracle bullshit.' >> $HOME/.bashrc
echo 'export ORACLE_VERSION="11.2"' >> $HOME/.bashrc
echo 'export ORACLE_HOME="/usr/lib/oracle/$ORACLE_VERSION/client64/"' >> $HOME/.bashrc
echo 'export PATH=$PATH:"$ORACLE_HOME/bin"' >> $HOME/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"$ORACLE_HOME/lib"' >> $HOME/.bashrc
. $HOME/.bashrc

# INSTALL cx_Oracle #
#####################

pip install cx_Oracle

A good guide to using cx_Oracle: http://www.oracle.com/technetwork/articles/dsl/prez-python-queries-101587.html

Example:

import cx_Oracle

# db helper named arrays
def rows_to_dict_list(cursor):
    columns = [i[0] for i in cursor.description]
    return [dict(zip(columns, row)) for row in cursor]

# Connect to DB
dsn_tns = cx_Oracle.makedsn("10.10.10.1",1521,"TESTDB")
db = cx_Oracle.connect("testuser","password",dsn_tns)
cursor = db.cursor()

# Get data from DB
cursor.execute("SELECT * FROM test_tab")
result = rows_to_dict_list(cursor)

# Insert into DB (use bind variables; Oracle treats double-quoted values as identifiers, not strings)
cursor.execute("INSERT INTO test_tab (row1, row2, row3) VALUES (:1, :2, :3)", ("xxx", "yyy", "zzz"))
db.commit()

# close db
db.close()
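The rows_to_dict_list() helper only relies on cursor.description and row iteration, so it can be verified without a database using a small stub (FakeCursor is invented for this sketch):

```python
class FakeCursor(object):
    """Minimal stand-in for a cx_Oracle cursor: description + row iteration."""
    description = [("ID", None), ("NAME", None)]

    def __iter__(self):
        return iter([(1, "foo"), (2, "bar")])


def rows_to_dict_list(cursor):
    columns = [i[0] for i in cursor.description]
    return [dict(zip(columns, row)) for row in cursor]


result = rows_to_dict_list(FakeCursor())
print(result)
```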

 

Python: Snippet – Arguments for command line tools with getopt or argparse

My personal favourite is argparse, but here are both solutions for completeness. getopt and argparse both ship with Python and do not need to be installed separately.

The getopt approach:

#!/usr/bin/env python

import getopt
import sys

def usage():
    print "test1.py - A test script.\n" \
          " -p, --print Return a string \n" \
          " -h, --help Help"


def main():
    try:
        opts, args = getopt.getopt(sys.argv[1:], "p:h", ['print=', 'help'])

    except getopt.GetoptError as err:
        print str(err)
        sys.exit(2)

    for o, a in opts:
        if o in ('-p', '--print'):
            string_to_print = a

        if o in ('-h', '--help'):
            usage()
            sys.exit(2)

    if not 'string_to_print' in locals():
        print "-p or --print is not given or string is missing\n"
        usage()
        sys.exit(2)

    print string_to_print


if __name__ == "__main__":
    main()

The argparse approach:

#!/usr/bin/env python

import argparse

def main():
    parser = argparse.ArgumentParser(description="test3.py - A test script.")
    parser.add_argument('-p', '--print', dest='string_to_print', required=True, help="String to print")
    args = parser.parse_args()

    print args.string_to_print

if __name__ == "__main__":
    main()

 

Python: Snippet – Multiprocessing

If jobs can be parallelized, you can use multiprocessing in Python.

#!/usr/bin/env python

import os
from multiprocessing import Pool



def worker(job):
    x, y = job

    result = x ** y

    if hasattr(os, 'getppid'):
        print "parent process pid:", os.getppid()
    print "process pid:", os.getpid()

    print "result is: ", result
    print "---"


if __name__ == '__main__':
    jobs = [(1, 2), (3, 4), (5, 6), (11, 12), (13, 14), (15, 16), (21, 22), (23, 24), (25, 26)]
    pool = Pool(processes=5)

    for job in jobs:
        pool.apply_async(worker, args=(job,))

    pool.close()
    pool.join()

Result:

max@cmkdev:~$ python mp.py 
parent process pid: 19599
process pid: 19600
result is:  1
---
parent process pid: 19599
process pid: 19601
result is:  81
---
parent process pid: 19599
process pid: 19602
result is:  15625
---
parent process pid: 19599
process pid: 19602
result is:  3138428376721
---
parent process pid: 19599
process pid: 19600
result is:  6568408355712890625
---
parent process pid: 19599
process pid: 19600
result is:  122694327386105632949003612841
---
parent process pid: 19599
process pid: 19600
result is:  480250763996501976790165756943041
---
parent process pid: 19599
process pid: 19602
result is:  2220446049250313080847263336181640625
---
parent process pid: 19599
process pid: 19604
result is:  3937376385699289
---

 

Python: Snippet – Searching a file and returning the line numbers

test.txt, in which we search for foobar:

wer
w
erw
erwer
foobar
sfsdfhsdkjfhkjsdf
sdf
sdf
sdf
sdf
sdf
sdflskdjflsdjflksjflksjf
sdfkjsdfjkhskjhffoobardjskfhskdjhfkjsdhfkjshdf
sflksdjfjklsdfjs
dfs
dfs
df
sdf
sdf
dsf

Test script for the search:

#!/usr/bin/env python

filename = 'test.txt'
search = 'foobar'

with open(filename) as f:
    for num, line in enumerate(f, 1):
        if search in line:
            print '%s - found at line:' % search, num



Result:

dev1@cmkdev1:/home/dev1$ python test.py 
foobar - found at line: 5
foobar - found at line: 13
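A variant that collects the hits and returns them as a list, so the line numbers can be processed further:

```python
import tempfile


def find_lines(filename, search):
    # Return the 1-based line numbers of all lines that contain `search`
    hits = []
    with open(filename) as f:
        for num, line in enumerate(f, 1):
            if search in line:
                hits.append(num)
    return hits


# Small self-test with a scratch file
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as tmp:
    tmp.write("wer\nfoobar\nsdf\nxxfoobarxx\n")

print(find_lines(tmp.name, "foobar"))
```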