Channel: Questions in topic: "indexes.conf"

How to configure a new Splunk instance to search previously indexed data stored on S3?

I have previously indexed data uploaded to an S3 bucket. I installed Splunk (full version) on an EC2 instance (RHEL 7) and persistently mounted the S3 bucket on the instance with FUSE. I can see all the data under the mount directory (e.g. /my_s3fs_mount_directory/index_name/db_1234567_123456_1234/rawdata/journal.gz). My question is how I should edit indexes.conf correctly, so that my new indexer sees this data and doesn't accidentally overwrite the existing data in that path. Here is what I have so far:

```
[myindex]
homePath = $SPLUNK_DB/my_s3fs_mount_directory/index_namedb/db
coldPath = $SPLUNK_DB/my_s3fs_mount_directory/index_namedb/colddb
thawedPath = $SPLUNK_DB/my_s3fs_mount_directory/index_namedb/thaweddb
maxDataSize = 10000
maxHotBuckets = 10
```

Is there anything else I need to do, or another .conf file I would also need to edit? Any advice is appreciated. Thank you
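One thing worth checking in configurations like this (a hedged observation, not a verified fix): $SPLUNK_DB is itself a path prefix, so `$SPLUNK_DB/my_s3fs_mount_directory/...` resolves underneath Splunk's data directory rather than at the filesystem mount point. If the FUSE mount actually lives at the filesystem root, the stanza would need absolute paths, along these lines:

```
# Hypothetical sketch: point the index directly at the FUSE mount.
# Assumes the mount is at /my_s3fs_mount_directory; adjust to your layout.
[myindex]
homePath   = /my_s3fs_mount_directory/index_name/db
coldPath   = /my_s3fs_mount_directory/index_name/colddb
thawedPath = /my_s3fs_mount_directory/index_name/thaweddb
```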

Has anyone successfully set up the remotePath option in indexes.conf in Splunk 7.0 to work with indexed data in S3?

I was trying to search copies of indexed data in S3. Has anyone had luck with this scenario using remotePath? I know the documentation says it is not supported, but is it functional at this point? From the indexes.conf spec for Splunk 7.0 [https://docs.splunk.com/Documentation/Splunk/7.0.1/Admin/Indexesconf]:

```
remotePath =
* Currently not supported. This setting is related to a feature that is
  still under development.
* Optional.
* Presence of this parameter means that this index uses remote storage,
  instead of the local file system, as the main repository for bucket
  storage. The index processor works with a cache manager to fetch buckets
  locally, as necessary, for searching and to evict them from local storage
  as space fills up and they are no longer needed for searching.
* This setting must be defined in terms of a storageType=remote volume
  definition. See the volume section below.
* The path portion that follows the volume reference is relative to the path
  specified for the volume. For example, if the path for a volume "v1" is
  "s3://bucket/path" and "remotePath" is "volume:v1/idx1", then the fully
  qualified path will be "s3://bucket/path/idx1". The rules for resolving
  the relative path with the absolute path specified in the volume can vary
  depending on the underlying storage type.
* If "remotePath" is specified, the "coldPath" and "thawedPath" attributes
  are ignored. However, they still must be specified.
```

Any advice or lessons learned is appreciated. Thank you
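For reference, a minimal sketch of what such a configuration might look like, built only from the documentation excerpt above (the volume name `v1`, the bucket path, and the index name `idx1` are the documentation's own examples; the feature was explicitly unsupported in 7.0, and a real deployment would likely need additional remote-storage settings such as credentials):

```
# Sketch only -- unsupported in 7.0 per the docs quoted above.
[volume:v1]
storageType = remote
path = s3://bucket/path

[idx1]
remotePath = volume:v1/idx1
homePath   = $SPLUNK_DB/idx1/db
# coldPath and thawedPath are ignored when remotePath is set,
# but per the spec they must still be specified:
coldPath   = $SPLUNK_DB/idx1/colddb
thawedPath = $SPLUNK_DB/idx1/thaweddb
```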

Understanding indexes.conf

Hello guys, I would like to understand whether I have any misconfiguration in my indexes files, for how long I keep logs online and archived, and when they are deleted (since my HDD is filling up quickly):

```
[default]
suppressBannerList =
frozenTimePeriodInSecs = 15778463
throttleCheckPeriod = 15
quarantineFutureSecs = 2592000
partialServiceMetaPeriod = 0
serviceOnlyAsNeeded = true
maxHotBuckets = 3
enableOnlineBucketRepair = true
bucketRebuildMemoryHint = auto
maxRunningProcessGroups = 8
maxDataSize = auto
maxWarmDBCount = 300
assureUTF8 = false
maxHotIdleSecs = 0
enableRealtimeSearch = true
serviceMetaPeriod = 25
repFactor = 0
maxConcurrentOptimizes = 3
maxHotSpanSecs = 7776000
maxTimeUnreplicatedWithAcks = 60
syncMeta = true
coldToFrozenDir =
maxRunningProcessGroupsLowPriority = 1
serviceSubtaskTimingPeriod = 30
quarantinePastSecs = 77760000
rawChunkSizeBytes = 131072
sync = 0
maxBucketSizeCacheEntries = 1000000
coldToFrozenScript = "/opt/splunk/bin/python" "/opt/splunk/bin/coldToFrozen.py"
rotatePeriodInSecs = 60
memPoolMB = auto
defaultDatabase = main
enableDataIntegrityControl = true
minRawFileSyncSecs = disable
compressRawdata = true
maxMetaEntries = 1000000
maxBloomBackfillBucketAge = 30d
maxTotalDataSizeMB = 500000
maxTimeUnreplicatedNoAcks = 300

[_audit]
coldPath = $SPLUNK_DB/audit/colddb
homePath = $SPLUNK_DB/audit/db
thawedPath = $SPLUNK_DB/audit/thaweddb

[_internal]
frozenTimePeriodInSecs = 2419200
homePath = $SPLUNK_DB/_internaldb/db
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
maxDataSize = 100
coldPath = $SPLUNK_DB/_internaldb/colddb

[_thefishbucket]
frozenTimePeriodInSecs = 2419200
homePath = $SPLUNK_DB/fishbucket/db
thawedPath = $SPLUNK_DB/fishbucket/thaweddb
maxDataSize = 10
coldPath = $SPLUNK_DB/fishbucket/colddb

[history]
frozenTimePeriodInSecs = 604800
homePath = $SPLUNK_DB/historydb/db
thawedPath = $SPLUNK_DB/historydb/thaweddb
maxDataSize = 10
coldPath = $SPLUNK_DB/historydb/colddb

[main]
maxDataSize = auto_high_volume
homePath = $SPLUNK_DB/defaultdb/db
maxHotBuckets = 10
coldPath = $SPLUNK_DB/defaultdb/colddb
maxHotIdleSecs = 86400
maxConcurrentOptimizes = 6
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[splunklogger]
coldPath = $SPLUNK_DB/splunklogger/colddb
disabled = true
homePath = $SPLUNK_DB/splunklogger/db
thawedPath = $SPLUNK_DB/splunklogger/thaweddb

[summary]
coldPath = $SPLUNK_DB/summarydb/colddb
homePath = $SPLUNK_DB/summarydb/db
thawedPath = $SPLUNK_DB/summarydb/thaweddb
```
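A note for reading the configuration above (general indexes.conf behavior, hedged since defaults vary by version): buckets are frozen, meaning deleted, or archived when coldToFrozenDir/coldToFrozenScript is set as it is here, once the newest event in a bucket is older than frozenTimePeriodInSecs, or once the index grows past maxTotalDataSizeMB, whichever comes first. The [default] values above work out roughly to:

```
# Rough interpretation of the retention-relevant [default] values above
frozenTimePeriodInSecs = 15778463   # ~183 days (~6 months) before a bucket freezes
maxTotalDataSizeMB     = 500000     # ~488 GB cap per index; freezing starts earlier if hit
coldToFrozenScript = "/opt/splunk/bin/python" "/opt/splunk/bin/coldToFrozen.py"
                                    # frozen buckets are archived by this script, not deleted
```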

I am getting an error from TailReader and the file is not being auto-indexed

I have a .csv that was dropped in an auto-index folder and I am getting this error:

```
-0500 ERROR TailReader - Ignoring path="X" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions conf
```

I did not change my process (another file was dropped here this morning and it worked fine) and this file doesn't look any different. Do you know why this is happening and how to fix it? Thank you!

Why is no data coming into the summary index but I can see data in the internal index?

I just installed Splunk in the lab environment. I installed a deployment server and a universal forwarder on a Win10 machine (risenlab1). I couldn't get any data, so I turned off all firewalls; then all of a sudden 272 events came in (screenshot 1). The only problem was that no more data was coming in. Then I stumbled upon this command:

```
index=_internal | stats count by host
```

This search showed that data is coming in to the internal index (screenshot 2)... I'm not quite sure what an internal index is and what this means. I also noticed that data is being collected from another host machine (risen300) into the internal index. I did not install a universal forwarder on that host, so how is data being forwarded from it? Below are my conf files.

**indexes.conf**

```
[msad]
homePath = $SPLUNK_DB/msad/db
coldPath = $SPLUNK_DB/msad/colddb
thawedPath = $SPLUNK_DB/msad/thaweddb
maxDataSize = 10000
maxHotBuckets = 10

[perfmon]
homePath = $SPLUNK_DB/perfmon/db
coldPath = $SPLUNK_DB/perfmon/colddb
thawedPath = $SPLUNK_DB/perfmon/thaweddb
maxDataSize = 10000
maxHotBuckets = 10

[wineventlog]
homePath = $SPLUNK_DB/wineventlog/db
coldPath = $SPLUNK_DB/wineventlog/colddb
thawedPath = $SPLUNK_DB/wineventlog/thaweddb
maxDataSize = 10000
maxHotBuckets = 10
```

**outputs.conf**

```
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.0.37:9997

[tcpout-server://192.168.0.37:9997]
```

**inputs.conf**

```
# Copyright (C) 2009-2016 Splunk Inc. All Rights Reserved.
# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local.
# To make changes, copy the section/stanza you want to change from
# $SPLUNK_HOME/etc/apps/Splunk_TA_windows/default into ../local and edit there.
#
[default]
evt_dc_name =
evt_dns_name =

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
index = wineventlog
renderXml = false

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = wineventlog
renderXml = false

###### OS Logs (Splunk 5.x only) ######
# If you are running Splunk 5.x remove the above OS log stanzas and uncomment these three.
#[WinEventLog:Application]
#disabled = 0
#start_from = oldest
#current_only = 0
#checkpointInterval = 5
#index = wineventlog
#
#[WinEventLog:Security]
#disabled = 0
#start_from = oldest
#current_only = 0
#evt_resolve_ad_obj = 1
#checkpointInterval = 5
#index = wineventlog
#
#[WinEventLog:System]
#disabled = 0
#start_from = oldest
#current_only = 0
#checkpointInterval = 5
#index = wineventlog

###### DHCP ######
[monitor://$WINDIR\System32\DHCP]
disabled = 0
whitelist = DhcpSrvLog*
crcSalt = <SOURCE>
sourcetype = DhcpSrvLog
index = windows

###### Windows Update Log ######
[monitor://$WINDIR\WindowsUpdate.log]
disabled = 0
sourcetype = WindowsUpdateLog
index = windows

###### Scripted Input (See also wmi.conf)
[script://.\bin\win_listening_ports.bat]
disabled = 0
## Run once per hour
interval = 3600
sourcetype = Script:ListeningPorts
index = windows

[script://.\bin\win_installed_apps.bat]
disabled = 0
## Run once per day
interval = 86400
sourcetype = Script:InstalledApps
index = windows

###### Host monitoring ######
[WinHostMon://Computer]
interval = 600
disabled = 0
type = Computer
index = windows

[WinHostMon://Process]
interval = 600
disabled = 0
type = Process
index = windows

[WinHostMon://Processor]
interval = 600
disabled = 0
type = Processor
index = windows

[WinHostMon://Application]
interval = 600
disabled = 0
type = Application
index = windows

[WinHostMon://NetworkAdapter]
interval = 600
disabled = 0
type = NetworkAdapter
index = windows

[WinHostMon://Service]
interval = 600
disabled = 0
type = Service
index = windows

[WinHostMon://OperatingSystem]
interval = 600
disabled = 0
type = OperatingSystem
index = windows

[WinHostMon://Disk]
interval = 600
disabled = 0
type = Disk
index = windows

[WinHostMon://Driver]
interval = 600
disabled = 0
type = Driver
index = windows

[WinHostMon://Roles]
interval = 600
disabled = 0
type = Roles
index = windows

###### Print monitoring ######
[WinPrintMon://printer]
type = printer
interval = 600
baseline = 1
disabled = 0
index = windows

[WinPrintMon://driver]
type = driver
interval = 600
baseline = 1
disabled = 0
index = windows

[WinPrintMon://port]
type = port
interval = 600
baseline = 1
disabled = 0
index = windows

###### Network monitoring ######
[WinNetMon://inbound]
direction = inbound
disabled = 0
index = windows

[WinNetMon://outbound]
direction = outbound
disabled = 0
index = windows

###### Splunk 5.0+ Performance Counters ######
## CPU
[perfmon://CPU]
counters = % Processor Time; % User Time; % Privileged Time; Interrupts/sec; % DPC Time; % Interrupt Time; DPCs Queued/sec; DPC Rate; % Idle Time; % C1 Time; % C2 Time; % C3 Time; C1 Transitions/sec; C2 Transitions/sec; C3 Transitions/sec
disabled = 0
instances = *
interval = 10
object = Processor
useEnglishOnly = true
index = perfmon

## Logical Disk
[perfmon://LogicalDisk]
counters = % Free Space; Free Megabytes; Current Disk Queue Length; % Disk Time; Avg. Disk Queue Length; % Disk Read Time; Avg. Disk Read Queue Length; % Disk Write Time; Avg. Disk Write Queue Length; Avg. Disk sec/Transfer; Avg. Disk sec/Read; Avg. Disk sec/Write; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec; Disk Bytes/sec; Disk Read Bytes/sec; Disk Write Bytes/sec; Avg. Disk Bytes/Transfer; Avg. Disk Bytes/Read; Avg. Disk Bytes/Write; % Idle Time; Split IO/Sec
disabled = 0
instances = *
interval = 10
object = LogicalDisk
useEnglishOnly = true
index = perfmon

## Physical Disk
[perfmon://PhysicalDisk]
counters = Current Disk Queue Length; % Disk Time; Avg. Disk Queue Length; % Disk Read Time; Avg. Disk Read Queue Length; % Disk Write Time; Avg. Disk Write Queue Length; Avg. Disk sec/Transfer; Avg. Disk sec/Read; Avg. Disk sec/Write; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec; Disk Bytes/sec; Disk Read Bytes/sec; Disk Write Bytes/sec; Avg. Disk Bytes/Transfer; Avg. Disk Bytes/Read; Avg. Disk Bytes/Write; % Idle Time; Split IO/Sec
disabled = 0
instances = *
interval = 10
object = PhysicalDisk
useEnglishOnly = true
index = perfmon

## Memory
[perfmon://Memory]
counters = Page Faults/sec; Available Bytes; Committed Bytes; Commit Limit; Write Copies/sec; Transition Faults/sec; Cache Faults/sec; Demand Zero Faults/sec; Pages/sec; Pages Input/sec; Page Reads/sec; Pages Output/sec; Pool Paged Bytes; Pool Nonpaged Bytes; Page Writes/sec; Pool Paged Allocs; Pool Nonpaged Allocs; Free System Page Table Entries; Cache Bytes; Cache Bytes Peak; Pool Paged Resident Bytes; System Code Total Bytes; System Code Resident Bytes; System Driver Total Bytes; System Driver Resident Bytes; System Cache Resident Bytes; % Committed Bytes In Use; Available KBytes; Available MBytes; Transition Pages RePurposed/sec; Free & Zero Page List Bytes; Modified Page List Bytes; Standby Cache Reserve Bytes; Standby Cache Normal Priority Bytes; Standby Cache Core Bytes; Long-Term Average Standby Cache Lifetime (s)
disabled = 0
interval = 10
object = Memory
useEnglishOnly = true
index = perfmon

## Network
[perfmon://Network]
counters = Bytes Total/sec; Packets/sec; Packets Received/sec; Packets Sent/sec; Current Bandwidth; Bytes Received/sec; Packets Received Unicast/sec; Packets Received Non-Unicast/sec; Packets Received Discarded; Packets Received Errors; Packets Received Unknown; Bytes Sent/sec; Packets Sent Unicast/sec; Packets Sent Non-Unicast/sec; Packets Outbound Discarded; Packets Outbound Errors; Output Queue Length; Offloaded Connections; TCP Active RSC Connections; TCP RSC Coalesced Packets/sec; TCP RSC Exceptions/sec; TCP RSC Average Packet Size
disabled = 0
instances = *
interval = 10
object = Network Interface
useEnglishOnly = true
index = perfmon

## Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
instances = *
interval = 10
object = Process
useEnglishOnly = true
index = perfmon

## System
[perfmon://System]
counters = File Read Operations/sec; File Write Operations/sec; File Control Operations/sec; File Read Bytes/sec; File Write Bytes/sec; File Control Bytes/sec; Context Switches/sec; System Calls/sec; File Data Operations/sec; System Up Time; Processor Queue Length; Processes; Threads; Alignment Fixups/sec; Exception Dispatches/sec; Floating Emulations/sec; % Registry Quota In Use
disabled = 0
instances = *
interval = 10
object = System
useEnglishOnly = true
index = perfmon

[admon://default]
disabled = 0
monitorSubtree = 1

[WinRegMon://default]
disabled = 0
hive = .*
proc = .*
type = rename|set|delete|create
index = windows

[WinRegMon://hkcu_run]
disabled = 0
hive = \\REGISTRY\\USER\\.*\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\.*
proc = .*
type = set|create|delete|rename
index = windows

[WinRegMon://hklm_run]
disabled = 0
hive = \\REGISTRY\\MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\.*
proc = .*
type = set|create|delete|rename
index = windows
```

Setting up Multisite Cluster, why can't Cluster Peers (indexers) start?

Obviously, this is a complex task; please only respond if you have high confidence in the nature of the error I'm receiving. I don't want to go on a wild goose chase. Version 6.6.2. I'm setting up a new multisite indexing cluster (I've done this before during a professional services engagement), and I'm following the Splunk docs very closely on setting up clusters and multisite clusters. I've fully read all the docs related to these topics several times over, and feel I have a very high understanding of the tasks to be completed. However, I'm running into an error which is not allowing the cluster peers to start. I will post the error at the end, due to its length.

I've configured the master node: http://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Configuremasterwithserverconf and http://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Multisiteconffile The master node is online and waiting for the cluster peers (indexers) to come online, just as the documentation said it would. I've also configured the peer nodes: http://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Configurepeerswithserverconf and http://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/Multisiteconffile

Now, when I attempt to start the peer nodes, I get the errors below in splunkd.log, and splunkd won't start. I've attempted many ways to define `repFactor=auto` or `repFactor=0`, but really the error makes no sense to me. Following the directions in the error has not made any difference. The same error occurs whether master-apps on the cluster master is empty or has appropriate indexes.conf files. Thanks for any help.

Error when attempting to start a cluster peer node (indexer); the same message is repeated five times within each log line, shown once here for readability:

```
02-21-2018 11:15:36.652 -0500 ERROR CMBundleMgr - Download bundle failed, err="App='system' with replicated index='_introspection' is neither in the bundle downloaded from master nor managed by local deployment client. Either define this index at the master or specify repFactor=0 on peer to skip replication."
02-21-2018 11:15:37.652 -0500 ERROR CMSlave - event=getActiveBundle failed with err="App='system' with replicated index='_introspection' is neither in the bundle downloaded from master nor managed by local deployment client. Either define this index at the master or specify repFactor=0 on peer to skip replication." even after multiple attempts, Exiting..
02-21-2018 11:15:37.653 -0500 ERROR loader - Failed to download bundle from master, err="App='system' with replicated index='_introspection' is neither in the bundle downloaded from master nor managed by local deployment client. Either define this index at the master or specify repFactor=0 on peer to skip replication.", Won't start splunkd.
```
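For context on what the error is asking for (a sketch of the two options the message itself names, not a verified fix): the peer believes `_introspection` should be replicated but cannot find a matching definition in the bundle from the master. Either define the index in an app under master-apps on the master and apply the bundle, or set repFactor=0 for it locally on the peer:

```
# Option 1 -- on the cluster master, e.g. in
# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf (then apply the bundle):
[_introspection]
repFactor = auto

# Option 2 -- on each peer, e.g. in etc/system/local/indexes.conf,
# skip replication for this index:
[_introspection]
repFactor = 0
```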

Ideal indexes.conf

Hi, can I please know the ideal configuration for indexes.conf? Should we include parameters like ```homePath.maxDataSizeMB```, ```coldPath.maxDataSizeMB```, etc.? Or is it enough to specify only ```frozenTimePeriodInSecs```?
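For reference, retention is governed by several limits at once: a bucket freezes when its newest event is older than `frozenTimePeriodInSecs`, and the index also starts freezing its oldest buckets when it hits `maxTotalDataSizeMB`, so time-only retention can be cut short by the size cap. The per-path caps like `homePath.maxDataSizeMB` and `coldPath.maxDataSizeMB` matter mainly when hot/warm and cold live on different volumes. A hedged sketch with illustrative values, not a universal "ideal" (the `volume:hot` and `volume:cold` names are assumed to be defined elsewhere):

```
[my_index]
homePath   = volume:hot/my_index/db
coldPath   = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
frozenTimePeriodInSecs = 7776000    # time-based retention (90 days here)
maxTotalDataSizeMB     = 204800     # size cap; freezing also triggers here
homePath.maxDataSizeMB = 51200      # optional: cap hot/warm usage for this index
coldPath.maxDataSizeMB = 153600     # optional: cap cold usage for this index
```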

How to create indexes in Splunk Cloud using the REST API?

I need to automate a new deployment on our end, and for Splunk monitoring to be automated I need to make a REST call to create an index in Splunk Cloud on the fly. Is that doable?
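For comparison (hedged: Splunk Cloud generally does not expose indexer management endpoints directly, and index creation there typically goes through Splunk Support or the self-service tooling for your Cloud stack), on Splunk Enterprise an index can be created with a POST to the `data/indexes` endpoint:

```
# Splunk Enterprise example (assumes admin credentials and management port 8089;
# splunk.example.com and my_new_index are placeholders).
curl -k -u admin:changeme https://splunk.example.com:8089/services/data/indexes \
     -d name=my_new_index \
     -d maxTotalDataSizeMB=500000
```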

Why was the datamodel_summary folder created automatically after I changed the SPLUNK_DB path?

Hi, I changed the path of index data from the D drive to a new drive. The change succeeded and I can search logs on the new drive. But I found a folder with the same index name on the D drive, and inside it there is a "datamodel_summary" folder. I would like to move only this index, so I would rather not change SPLUNK_DB in splunk-launch.conf. Is this happening because I did not change SPLUNK_DB in splunk-launch.conf? Also, what should I do so that Splunk creates the "datamodel_summary" folder on the new drive? I appreciate any suggestions.

Environment: Splunk 6.5.3, OS: Windows Server 2012 R2

splunk-launch.conf:

```
SPLUNK_DB=D:\splunk_data\splunk\
```

indexes.conf (before):

```
[test-index]
coldPath = $SPLUNK_DB\test-index\colddb
homePath = $SPLUNK_DB\test-index\db
thawedPath = $SPLUNK_DB\test-index\thaweddb
frozenTimePeriodInSecs = 34300800
coldToFrozenDir = D:\splunk_data\archive\test-index
```

indexes.conf (after):

```
[test-index]
coldPath = E:\splunk_data\splunk\test-index\colddb
homePath = E:\splunk_data\splunk\test-index\db
thawedPath = E:\splunk_data\splunk\test-index\thaweddb
frozenTimePeriodInSecs = 34300800
coldToFrozenDir = E:\splunk_data\archive\test-index
```

This is how I made the change:

1. Stop the Splunk service.
2. Move D:\splunk_data\splunk\i-filter to E:\splunk_data\splunk\i-filter.
3. Change indexes.conf.
4. Start the Splunk service.

Regards,
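One hedged possibility worth checking (not verified on 6.5.3): the location of data model acceleration summaries is controlled by the separate per-index setting `tstatsHomePath`, which by default resolves to a path under SPLUNK_DB rather than under homePath, so moving homePath/coldPath alone can leave summaries being rebuilt on the old drive. A sketch of pointing it at the new drive (`e_summaries` is a hypothetical volume name; tstatsHomePath must reference a volume):

```
# Hypothetical sketch for indexes.conf -- verify the defaults for your version.
[volume:e_summaries]
path = E:\splunk_data\summaries

[test-index]
tstatsHomePath = volume:e_summaries\test-index\datamodel_summary
```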

Anyone able to help me tune my indexes.conf?

All, sorry guys, I don't do this much and the docs are not giving me the warm and fuzzies about how to do this. I'd like to take advantage of the fact that I have pretty snappy local disks on my new servers to keep a day or two of events there before rolling to cold. Never really done this before. Where I am running into trouble is the config for hot/warm. What's the setting here to get Splunk rolling from FASTDISK to BIGDISK instead of just running out of space and crashing? Does Splunk just "know" as it approaches 700 gigs to start rolling? I assume this would be a volume-level setting, but I'm not seeing it.

```
# 12TB store
[volume:bigdisk]
path = /data
maxVolumeDataSizeMB = 110000000

# 1TB local SSDs
[volume:fastdisk]
path = /splunk_local
maxVolumeDataSizeMB = 700000

[default]
# 10gig buckets
maxDataSize = auto_high_volume
# Company requires min 120 day logs available
frozenTimePeriodInSecs = 36820000
# Hot/Warm - should just roll to cold when fastdisk is low on space
homePath = volume:fastdisk/$_index_name/db
homePath.maxDataSizeMB = 700000
# Cold - should drop data when full - not crash
coldPath = volume:bigdisk/$_index_name/colddb
coldPath.maxDataSizeMB = 110000000
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

[main]
[os]
[windows]
```
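As a hedged note on the mechanism being asked about (check your version's indexes.conf spec to confirm): `maxVolumeDataSizeMB` on a volume is exactly that trigger. When total bucket usage in the volume reaches the cap, Splunk rolls the oldest warm buckets in that volume to cold, so these two lines from the config above are what do the work:

```
[volume:fastdisk]
path = /splunk_local
maxVolumeDataSizeMB = 700000   # reaching this rolls the oldest warm buckets to cold
```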

Why is data rebalance progress very slow or getting stuck?

On Splunk version 6.6.6 we have a slow rebalance. The environment is an 18-indexer multisite cluster showing data rebalance progress of about 0.05% per day. There are about 15-25k buckets per indexer, RF and SF are met, there are no fixup tasks, hot and cold storage are on solid state, and the network is a 10-gig fiber connection. We also cleared all excess buckets. Any thoughts? What could be blocking the progress?

Customizing indexes.conf to keep no cold data

Our requirement is that there be no cold data. Once data comes in, it is kept warm for 90 days and then moved to a frozen directory. We have done sizing: we have 3.2 TB for the warm volume and 3.8 TB for the frozen volume across all indexes, and indexing is 60 GB/day. Here is my stanza:

```
[firewall]
homePath = volume:primary/firewall/db
maxHotBuckets = 3
maxTotalDataSizeMB = 204800
enableDataIntegrityControl = 0
enableTsidxReduction = 0
maxWarmDBCount = 300
coldPath = volume:primary/firewall/colddb
frozenTimePeriodInSecs = 7776000
coldToFrozenDir = "/splunk_frozen/frozen_logs/firewall"
thawedPath = $SPLUNK_DB/firewall/thaweddb
```

Do I still need to define a coldPath? My coldToFrozenDir is giving an error, even though the directory exists with write permissions. What is the best approach to achieve this?
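A hedged sketch of one common approach (not verified against this environment): coldPath must still be defined even if you never want data to live there, so one option is to point it at the same volume and let warm hold everything by raising maxWarmDBCount well above the bucket count you expect in 90 days. Also note that coldToFrozenDir values are plain paths, typically written without surrounding quotes, which may be the source of the error:

```
[firewall]
homePath   = volume:primary/firewall/db
coldPath   = volume:primary/firewall/colddb      # required even if rarely used
thawedPath = $SPLUNK_DB/firewall/thaweddb
maxWarmDBCount = 100000                          # effectively "do not roll to cold"
frozenTimePeriodInSecs = 7776000                 # 90 days, as in the question
coldToFrozenDir = /splunk_frozen/frozen_logs/firewall   # no quotes
```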

User can see data in one index but not another with the same config

I've recently added some configuration that creates indexes for data. Each index has a corresponding role that adds both access to and search-by-default for the defined index. Let's suppose one index is called 'testing' and the other is called 'weblogs'. Users in the 'testing' role can see data in the 'testing' index, and users in the 'weblogs' role can see data in the 'weblogs' index. However, a user in _only_ the `admin` role, for which the allowed indexes are "all non-internal indexes", can see data in 'testing' but NOT in 'weblogs'. The config files are generated from the same template, and `btool` on search heads and indexers shows that they are the same except for the index/role name. I've yet to have any luck searching up a reason why this is the case. I'm okay with either outcome, but I don't understand why one index is behaving one way, and the other is behaving differently. How can I tell what's causing the difference?

Configuring Splunk for Multiple Indexer Partitions

I am not sure how to configure indexes.conf AND splunk-launch.conf. I understand multiple volumes in indexes.conf, such as:

```
[volume:hotwarm]
path = /splunkindexes/hot

[volume:cold]
path = /splunkdata
```

and using this in the index definition in indexes.conf:

```
[main]
homePath = volume:hotwarm/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
repFactor = auto
```

I have splunk-launch.conf set as:

```
SPLUNK_HOME=/splunk
SPLUNK_DB=/splunkindexes/hot
```

setting SPLUNK_DB to the hotwarm volume. I am not seeing the cold volume data. I just made these changes, and have the hot-to-cold rollover set by disk size, so data will stay hot until the disk is close to full and then roll to cold. Any ideas on why I cannot see the existing cold data? Thanks...

How to find the list of indexes and source types in a specific app?

I have a different kind of access called ELEVATED ACCESS in Splunk Enterprise, which sits below POWER USER but higher than USER, with different apps installed. I have access to only one app. Is there a way to identify the list of available indexes and source types that are used in my app?
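Two searches that may help, assuming your role is allowed to run them (note they list what your role can see overall, not strictly what one app uses; `yourindex` is a placeholder):

```
| eventcount summarize=false index=* | dedup index | fields index

| metadata type=sourcetypes index=yourindex
```

The first lists the indexes your role can search; the second lists the source types present in a given index.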

Why are my configurations not working even after a reboot?

The log files I'm working with use the log4j syntax, and I'm loading them into Splunk through the GUI (not real-time monitoring), so I don't need to update the inputs.conf file. I have customized the following configuration files.

indexes.conf:

```
[index_infodebug]
homePath = $SPLUNK_DB/$_index_infodebug/db
coldPath = $SPLUNK_DB/$_index_infodebug/colddb
thawedPath = $SPLUNK_DB/$_index_infodebug/thaweddb
frozenTimePeriodInSecs = 2628000
# 1 month; these logs are to be erased

[index_testconf]
homePath = $SPLUNK_DB/$_index_testconf/db
coldPath = $SPLUNK_DB/$_index_testconf/colddb
thawedPath = $SPLUNK_DB/$_index_testconf/thaweddb
frozenTimePeriodInSecs = 2628000
# 1 month; these logs are to be retained
coldToFrozenDir = my/archive/directory
```

transforms.conf:

```
[infodebug_logs]
REGEX = \d{3}\s*(INFO|DEBUG)\s*[[]
DEST_KEY = _MetaData:Index
FORMAT = index_infodebug

[short_source]
SOURCE_KEY = Metadata:Source
REGEX = Windchill_\d{4}-\d\d-\d\d_\d+_\d+\.tgz:\.\/Windchill_\d{4}-\d\d-\d\d_\d+_\d+\/(?[0-9a-zA-Z._-]+log)
DEST_KEY = MetaData:Source
```

(The name of the capture group in the second REGEX was eaten by the forum's italics formatting; ignore the missing characters.)

props.conf:

```
[testconf_sourcetype]
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE = \d\d?d\d:\d\d
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = True
TRANSFORMS =
TRUNCATE = 10000
category = Application
description = Output produced by any Java 2 Enterprise Edition (J2EE) application server using log4j
detect_trailing_nulls = false
maxDist = 75
pulldown_type = true
TRANSFORMS-index = infodebug_logs
TRANSFORMS-source = short_source
```

Both regexes are working: the first routes INFO and DEBUG events to the appropriate index, which is configured to erase them after one month (while other logs are archived); the second extracts more readable source names. I've tested them with the regex command, so I know they fit my data. After restarting the Splunk server, I loaded my data into Splunk. My problem is that NEITHER of the transforms NOR the archiving part is working. I tried with 60 seconds for the test and nothing happened. The events are only parsed the right way, as I specified in props.conf. I would be glad if someone could help me with these issues, thanks!
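For reference, the canonical index-routing pattern from the Splunk docs looks like the following (a sketch; the stanza and index names are taken from the question above). Two things that often trip this up: index-time transforms run only when events are parsed, i.e. on an indexer or heavy forwarder, and the props.conf stanza must match the sourcetype (or source/host) that the incoming data actually carries:

```
# props.conf -- the stanza name must match the data's sourcetype
[testconf_sourcetype]
TRANSFORMS-index_route = infodebug_logs

# transforms.conf -- route matching events to another index at index time
[infodebug_logs]
REGEX = \d{3}\s*(INFO|DEBUG)\s*[[]
DEST_KEY = _MetaData:Index
FORMAT = index_infodebug
```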

After updating my indexer to 2TB, which index volume should I increase?

I have upgraded my indexer from 450 GB to 2 TB to increase my data retention. Below is my current indexer volume configuration: hot volume: 70 GB, cold volume: 35 GB. Should I increase my hot volume or my cold volume? Please advise.

Changing cold path in indexes.conf in a Clustered Splunk Environment

Hi Team, here is our scenario: the directory currently set in the coldPath parameter in master-apps/org_all_indexes/local/indexes.conf is almost out of disk space. We are planning to change the coldPath and point it to a new directory with more disk space. Since we have a clustered environment, is it safe to just update the coldPath parameter in master-apps/org_all_indexes/local/indexes.conf? If not, what factors do we need to consider first to avoid unnecessary repercussions, and what are the best practices for migrating cold buckets to a new directory?

